Natural disasters such as wildfires, floods, and earthquakes are tragic events that can harm people, damage property, and strain society in complex ways. Emergency relief organizations exist to mitigate these problems. Our project aims to reduce the societal strain of natural disasters by facilitating the assessment of damage to property, specifically real estate. Our app gives assessors, who are associated with either a relief organization or an insurance company, the means to automatically generate damage assessment reports for residential properties.
The assessor arrives at a residential parcel, takes a picture, and after confirming the relevant address, our app returns a damage assessment report complete with:
- a before image: an interactive Google Street View panorama of the property,
- an after image: the photo taken by the assessor,
- the category of the damage source (e.g., fire, flood, earthquake),
- and summary details of the property (e.g., square footage).
Our application's automatically generated damage assessment reports provide value to three stakeholders:
- Insurance claim assessors benefit from a streamlined workflow
- Policyholders benefit from faster payment adjudication
- Emergency relief organizations (nonprofits or governmental agencies) benefit from greater insight into damage intensity and efficacy of preventative measures
Our prototype calls four APIs: Google Vision AI, Google Reverse Geocoding, Google Street View Static, and Zillow Get Deep Search Results.
- You will need to provide your own API keys, leave them unrestricted, and update the credentials file path (in app.py) used for API calls.
- No single photo demonstrates all of the application's functionality, so we provide two examples:
- Example 1 was taken in person the day before the report was generated; it successfully returns the before image of the property but fails to classify the damage type because Google Vision AI underperforms on new images.
- Example 2 was photographed from a computer screen; it successfully returns the damage type but fails to return the before image of the property because the image's location metadata corresponds to the computer's location, not the damage site.
- Limitations:
- Our prototype does not yet support Apple's HEIC format (the iPhone default); users must select JPEG output (Settings -> Camera -> Formats -> "Most Compatible").
- Our prototype requires that searched properties exist in the Zillow database (e.g., a commercial property will throw an error).
- Our prototype requires that geolocation (latitude/longitude) is included in the image file's metadata.
- The option to manually enter address information on the Address Validation page is not yet functional.
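The geolocation requirement above can be checked before a report is attempted. The sketch below reads the EXIF GPS block that the workflow depends on; it assumes the Pillow library, and the function names are illustrative rather than the prototype's actual code:

```python
def dms_to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to signed decimal degrees."""
    degrees = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -degrees if ref in ("S", "W") else degrees

def extract_lat_lon(path):
    """Return (lat, lon) from a photo's EXIF GPS block, or None if it is missing."""
    from PIL import Image              # Pillow; only this function needs it
    from PIL.ExifTags import TAGS, GPSTAGS

    exif = Image.open(path)._getexif()
    if not exif:
        return None
    gps = {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None
    lat = dms_to_degrees(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
    lon = dms_to_degrees(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))
    return lat, lon
```

A `None` result here is exactly the failure mode of Example 2-style photos, so the app can warn the user instead of producing a report with no before image.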
Workflow:
- Input: User takes in-app photo of damage site
- Action: Image is passed to Google Vision AI API; latitude/longitude information is extracted from image file and passed to Google Reverse Geocoding API
- Outputs:
- New report is generated, photo is attached to report as the after photo
- Google Vision AI returns type of damage incurred, attached to report as "Damage Source" (e.g., fire or flood)
- Google Reverse Geocoding API returns a list of street addresses for nearest properties
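The reverse-geocoding step above reduces to a single GET request. A minimal sketch follows; the endpoint and the `formatted_address` response field come from Google's Geocoding API, while the helper names are our own:

```python
from urllib.parse import urlencode

GEOCODE_ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

def reverse_geocode_url(lat, lon, api_key):
    """Build the Reverse Geocoding request for a photo's coordinates."""
    return GEOCODE_ENDPOINT + "?" + urlencode({"latlng": f"{lat},{lon}", "key": api_key})

def candidate_addresses(response_json):
    """Extract the formatted street addresses from a Geocoding API response body."""
    return [r["formatted_address"] for r in response_json.get("results", [])]
```

In the app, the JSON body comes back from an HTTP GET against this URL (e.g., `requests.get(url).json()`), and the resulting address list is what the user chooses from in the next step.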
- Input: User selects the relevant address from the list output in Step 1 (NOTE: manual address entry is planned but not yet functional; see Limitations)
- Action: Address is passed to two APIs: Google Street View Static and Zillow Get Deep Search Results.
- Outputs:
- Google Street View Static API returns an image of the property prior to the disaster, attached to report as before photo
- Zillow Get Deep Search Results API returns property information, attached to report as "Property Details"
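The before image in Step 2 is likewise a single request. Here is a sketch of the Street View Static URL; the endpoint and the `location`/`size`/`key` parameters follow Google's documentation, while the default image size is our assumption:

```python
from urllib.parse import urlencode

STREET_VIEW_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def street_view_url(address, api_key, size="600x400"):
    """Build the Street View Static request for the pre-disaster 'before' image."""
    return STREET_VIEW_ENDPOINT + "?" + urlencode(
        {"location": address, "size": size, "key": api_key}
    )
```

Fetching this URL returns the image bytes directly, which the report template can embed as the before photo.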
Current state: The final output of our application is an automatically generated, pre-filled report with the information relevant to claims assessment. Whether a claim is approved, and the reimbursement amount, must still be determined manually by the assessor.
Future state: If we create a database of reports after they are completed by the assessor,* we will have before/after photos associated with a damage source and labeled with reimbursement amounts. This database can support regression analysis to estimate reimbursement amounts for future reports. These reimbursement estimates can be used:
- by assessors as a benchmark for future reimbursement determinations
- by relief organizations (e.g., FEMA) to assess damage intensity by area and efficacy of local preparation measures, which enables more efficient and timely allocation of resources for prevention and relief.
* The reports in this database must exclude Zillow information, per its Terms of Service prohibiting the storage of their data.
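The regression idea above can be sketched with ordinary least squares. The features here (square footage plus damage-source indicators) and all numbers are purely illustrative, not real report data:

```python
import numpy as np

# Hypothetical completed reports: [square_footage, is_fire, is_flood] per row,
# with the assessor's final reimbursement (USD) as the label.
X = np.array([[1200, 1, 0], [2500, 0, 1], [1800, 1, 0], [3000, 0, 1]], dtype=float)
y = np.array([45_000, 90_000, 60_000, 110_000], dtype=float)

# Fit a linear model with intercept via least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_reimbursement(sqft, is_fire, is_flood):
    """Benchmark estimate for a new report (illustrative, not a real adjudication)."""
    return float(np.array([sqft, is_fire, is_flood, 1.0]) @ coef)
```

In practice this would be trained on the accumulated report database and surfaced in the report only as a benchmark for the assessor, not as a binding amount.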
Current state: Our application automatically generates a damage assessment report upon capturing a photo. Each picture taken prompts a completely new report, so multiple photographs cannot be attached to a single report.
Future state: If there is both exterior and interior damage, or damage to the garage as well as the house, a complete damage assessment report would require multiple photographs. To do this, ___
Current state: Our application is currently limited to exterior damage. If we could access before photographs of the property interior, we could extend our functionality to interior damage.
Future state: Zillow has access to interior images of properties, but does not provide them via API per its terms of service. Were we to partner with Zillow and satisfy their legal and business concerns associated with interior images, we could use them to extend our functionality to interior damage assessment.
Current state: For context, our application provides property details to assessors using data from Zillow, which is limited to residential real estate.
Future state: To extend application functionality to commercial, industrial, or agricultural real estate, we would need to obtain property details from another source.
Current state: We do not have access to insurance policies for properties. Many insurance policies, for example, exclude flood damage from coverage.
Future state: If we partnered with insurance companies, we could include this information in reports.
Current state: Our application uses Google Vision AI to classify the source of damage, which incurs fees with use.
Future state: If we substituted a proprietary image-classification algorithm here, we could avoid these fees.
Terms of Use for Google APIs (Maps APIs, Vision AI):
- Terms of Use and Privacy Policy for our application must be publicly available.
- It must be explicitly stated in our application's Terms of Use that by using our application, users are bound by Google’s Terms of Service.
- It must be noted in our Privacy Policy that we use the Google API(s), and the Google Privacy Policy must be incorporated by reference.
Terms of use for Zillow API:
- Zillow data must not constitute the primary functionality or the majority of content in mobile apps
- We may not retain copies of Zillow data
- We may not use Zillow data exclusively on the backend
- We must adhere to branding guidelines wherever Zillow data is present
- Include disclaimers (e.g., sections 7 and 8 of the Zillow Terms of Use)
| 0–1,000 | 1,000–5,000,000 | 5,000,000+ |
| --- | --- | --- |
| Free | 0.0015 USD per each (1.50 USD per 1000) | 0.001 USD per each (1.00 USD per 1000) |

| 0–100,000 | 100,001–500,000 | 500,000+ |
| --- | --- | --- |
| 0.005 USD per each (5.00 USD per 1000) | 0.004 USD per each (4.00 USD per 1000) | Contact Sales for volume pricing |

| 0–100,000 | 100,001–500,000 | 500,000+ |
| --- | --- | --- |
| 0.014 USD per each (14.00 USD per 1000) | 0.0112 USD per each (11.20 USD per 1000) | Contact Sales for volume pricing |
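Assuming these are graduated tiers (each rate applies only to the requests inside its volume band, as on Google's pricing pages), monthly cost can be estimated with a small helper; the tier values below mirror the first table:

```python
def tiered_cost(requests, tiers):
    """Cost for one month of API calls under graduated volume tiers.

    `tiers` is a list of (tier_ceiling, price_per_request); a ceiling of None
    means the tier is unbounded.
    """
    cost, prev_ceiling = 0.0, 0
    for ceiling, price in tiers:
        upper = requests if ceiling is None else min(requests, ceiling)
        if upper > prev_ceiling:
            cost += (upper - prev_ceiling) * price
        prev_ceiling = ceiling if ceiling is not None else requests
        if requests <= prev_ceiling:
            break
    return cost

# First table above: free through 1,000, then 0.0015 USD each through 5,000,000,
# then 0.001 USD each beyond that.
FIRST_TABLE_TIERS = [(1_000, 0.0), (5_000_000, 0.0015), (None, 0.001)]
```

For example, 10,000 requests in a month would cost 9,000 × 0.0015 = 13.50 USD under this reading, since the first 1,000 are free.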
- Mikhail Lenko: established the process to retrieve desired information from the Google Vision AI and Zillow Get Deep Search APIs using Python; wrote the ReadMe.
- Ryan Leyba: prototyped the function for extracting geolocation data from smartphone image files using Python; created the presentation deck.
- Kai Zhao: established the process to retrieve desired information from the Google Maps APIs; integrated backend code using Python Flask; built the front-end user interface with HTML/CSS.
Special thanks to Alison Norris for front-end consultation and to James Huang for front-end and back-end consultation.