One can hardly turn on the television today without being inundated by artificial intelligence (AI), whether in the form of commercials during the big game, the lead segment on the news, or advertisements on your favorite podcast. AI is all around us, and yet its practical uses in everyday life – and in the real estate development and construction world – are still in flux.
While some developers and contractors have already implemented AI into their operations, others continue to sit on the sidelines, waiting and watching to see how things play out before jumping in.
Potential uses for AI in real estate development and construction
The potential uses for AI in real estate development and construction are almost as endless as one’s imagination. For instance, AI can be used to assist with or double-check project designs, project administration, schedule revisions, and inspections, or to summarize meeting notes from toolbox talks and other meetings. Using AI to assist with designs can relieve architects and engineers of mundane tasks. It can be used to create initial designs and drafts of drawings, and can even be used to cross-check the cost of materials, the source of materials, the impact of any taxes or tariffs, the constructability of a design, or a given design’s energy efficiency.
AI may also be helpful in streamlining legal disputes. While AI cannot replace an advocate in the courtroom or render legal decisions and judgments, other uses are readily apparent. For example, AI could be used to analyze and summarize the voluminous documentation that is generated on a construction project every day (and it seems that the volume of documentation is ever-increasing) or to review and identify a handful of key, operative documents out of a sea of thousands (if not tens of thousands). By employing AI in this way, legal disputes could become more cost-efficient and quicker to resolve.
Risks associated with AI
But these benefits are not without risk. As it stands, the most appreciable risks of implementing AI in real estate development and construction are the risks more generally associated with implementing AI in any business. Those risks include hallucinations (i.e., AI is wrong or makes a mistake), training deficiencies (most AI models need to be trained – what happens if those tasked with training the model are themselves mistaken?), and the potential for user error or user abuse. Regardless of how a hallucination or error occurs, the end result is the same: a mistake has been made that needs correction. But who should shoulder the blame for a mistake made by AI? The user? The AI vendor? Someone else?
Issues with assigning blame can be particularly troublesome because software like AI often comes with what is referred to as a “clickwrap agreement.” That is, the purchaser or user of the AI software must first “click through” a set of terms and conditions, noting acceptance of those terms and conditions (by clicking an “I accept” button), before the software can be used. Such clickwrap agreements are generally enforceable, meaning that the terms and conditions contained therein are also enforceable. One common term included in clickwrap agreements is a limitation of liability (also known as a damages cap). Oftentimes, these limitations of liability set forth a very low amount of damages that can be recovered from the AI vendor, such as a pre-set dollar figure or the cost of the software (which, of course, can vary). So, what happens if AI hallucinates and is responsible for a six-figure design bust, but the clickwrap agreement limits the AI vendor’s liability to, say, $10,000? Further complicating the issue, what if the project owner required the design team or contractor to use the AI that caused the error? Who should be responsible for the delta between the clickwrap agreement’s limit of liability and the actual costs incurred to correct the error?
Mitigating risk through clear contractual provisions
It is essential that real estate development and construction contracts account for the emergence of AI. Key issues to consider include who will shoulder the cost of an AI hallucination and what role, if any, AI might play in assisting with dispute resolution. If a project owner requires that AI be used to assist with the design, then it stands to reason that the project owner should be responsible for any costs incurred to remedy an AI error.
Alternatively, the contract could provide that if a project participant decides to employ AI, then that participant is solely responsible for any costs incurred to correct an AI error. Or, for dispute resolution, will the parties agree to use a certain AI platform to assist with the review of voluminous project records in an effort to keep costs down? If so, such an agreement should be memorialized in the parties’ contract. When drafting such provisions, it is important to remember that clarity is king.
Identifying these issues upfront and determining how they will be addressed will allow all parties to go into the project with “eyes wide open.” That is, each party knows the allocation of risk for a given scenario, thereby enabling them to better plan against, or mitigate, that risk.
