Your AI Chat May Be a Liability: Risks Investors Are Starting to Notice

March 12, 2026

In the last three weeks, I've attended an IP Summit and a local Iowa Bio meeting, both heavily focused on AI, as everything is lately. With the growing focus on using AI to improve productivity, automation, and innovation comes legitimate caution for founders and R&D teams. Artificial intelligence tools (especially chatbots and large language models) are everywhere now, often embedded in existing software. They are quick, easy to use, and often generate impressively polished text in seconds. At Iowa Bio, conversations were heavy with examples of academic and commercial teams using AI both for ideation and work product generation. That convenience, however, comes with real intellectual property (and other legal) risks. And in practice, AI output is often incomplete and misleading in the ways that matter most.

The AI tool is not your lawyer

Unlike communications with counsel, conversations with AI tools are NOT privileged. Unfortunately, documents created outside a privileged relationship do not become protected simply because they are later shared with a lawyer. AI is not your lawyer, even if you prompt it to act like one, and current lawsuits and proposed legislation are evolving to define this more clearly. Specific to IP, patent rights may be blocked by public disclosure, particularly if rights outside the United States are desired. If you use AI to generate an invention summary or iterate on technical concepts, you may be creating a record that is stored, logged, or otherwise discoverable. What tool are you using? What is the security of that platform? Are the conversations available for training the model? What do the current data privacy and security terms for the systems you are using look like? Depending on the answers to those questions, once a disclosure is made, containing it may be impossible. It is long-standing U.S.
law that if an invention is disclosed before a patent application is filed, the consequences can be serious, including loss of foreign patent rights, enforceability problems, and costly complications. At Iowa Bio, counsel advised that they are treating conversations inventors have with AI tools as a "public disclosure" under U.S. patent law, which could serve as a barrier to obtaining patent protection.

AI-generated materials are often wrong in critical ways

There is also the problem of accuracy. Generative AI systems are known to hallucinate, even with today's models, and output depends heavily on the robustness of the prompt and the underlying data provided to these systems. In other words, they produce language that sounds authoritative and polished, citing references both real and imagined, while still being 100% wrong. If you are building businesses on innovation, this matters a great deal. When that language then finds its way into a patent application, internal invention records, a marketing piece, or, unfortunately, even an investor deck, it creates inconsistencies that can become expensive problems and increase corporate risk. At worst, it can impair patent prosecution and patent validity, cause missteps with regulatory compliance, and negatively impact valuation.

AI use can raise inventorship and ownership questions

There is another layer here that is easy to miss: inventorship. Inventorship must be based on who actually conceived the claimed invention. At the IP Summit, much time was given to discussing the role AI tools can play in the discovery process, whether the tool acts as a lab technician or rises to the level of conception, and the resulting consequences for the legal determination of inventorship. When inventors use AI to fill in gaps, restructure technical concepts, or suggest solutions, careful claim-by-claim analysis is required to determine who the actual inventor is.
Current law does not recognize AI as an inventor, but that does not mean the issue goes away. A challenger may still argue that the named inventors did not fully conceive the invention as ultimately claimed, particularly if AI-generated content introduced new elements that were not independently developed by the human inventors. That is exactly the kind of argument companies do not want to be defending years later.

Investors are watching

Because of the above issues, investors (VC, PE, and angel) are asking:

- What in your pitch deck was generated by AI?
- What human validation verifies those statements?
- Does the company have clear guardrails on AI use in R&D and product development?
- Are AI tools used only in data-privacy-controlled, approved environments?
- Are inventions discussed with counsel before being shared with third parties, including chatbots?
- Has management considered how AI-generated materials could appear in diligence or litigation?

Good news: We still have to use our brains!

One reason these tools are so tempting is that the output often looks finished. It reads well, sounds smart, and feels complete. But in many contexts, including legal matters, this is hugely misleading. For example, AI-generated invention summaries often miss the most important parts of the analysis. They may fail to clearly identify what is actually inventive, fail to distinguish the invention from prior art, or fail to frame the innovation in a way that supports patentability. They can be useful as a rough starting point in the right environment, but they are not a substitute for informed human judgment. At the IP Summit I attended, a major publicly traded company stated that it has serious issues with inventors using AI systems to write invention disclosures, for this very reason. Much time is spent going back and asking the inventors to parse out, "How much of this did you actually do, versus what the AI summary says you did?"
In my own practice, I've seen AI-generated contracts that looked official but had incorrect and inconsistent terms and did not cover the intended relationship between the parties. I've seen literal AI-to-AI correspondence, where each side is clearly throwing documents into a tool and asking it to review and pose questions to the other side, undermining the parties' credibility (questions which make no sense given the facts at hand, but that "sounded smart"). I've seen a tool advise that FDA pre-market authorization is needed for a veterinary medical device, citing statutes and regulations written for human medical devices, and reach a totally wrong conclusion. I've seen valuations based on incorrect assumptions about the breadth and strength of IP that were fed into a model as fact, which then suggested outlandish royalty rates that were taken as authoritative because the AI gave a quantitative range with analytical support. When I receive documents that are clearly AI output, often still authored by "python-X", it is a red flag and an opportunity to reground in business goals and factual context.

It is not a matter of whether or not to use AI, as it is becoming baked into systems we already interact with every day. Personally, I have a team of Openclaw agents on their dedicated Mac Studio who order groceries (with an occasional surprise candy bar), help set up a church project website and build family apps, analyze cross-platform fitness data, run home automation, and send nightly Bible verses with a reflection on the day's events. Our world is rapidly changing, and unless some public crisis event demands immediate regulation, AI is likely to play an increased role in how we operate personally and professionally.
To protect your innovation and grow your business, don't get caught in the crossfire; follow a few best practices:

- Treat public AI tools as public forums, not confidential workspaces.
- Use AI only in controlled, approved environments and only for limited, non-substantive tasks.
- Always treat output as a draft until it receives human review by someone with domain expertise.
- Avoid discussing inventions, algorithms, technical details, or legal risks with chatbots before consulting counsel.
- Involve counsel early so communications are protected from the start.
- Do not rely on AI to draft key documents, including:
  - Invention summaries intended for patent filings
  - Regulatory analysis, which might generate discoverable records that cause problems later on
  - Financial projections intended for investors

AI is a powerful productivity tool, but it does not come with legal protections, and it can be a credible liar. Understanding how to implement this technology alongside these risks can make the difference between using it to drive innovation and using it to compound corporate liability.

Cassie J. Edgar is a Partner and Patent Attorney. She is Chair of the AI Committee and the Licensing and Regulatory Law practice groups, and advises clients in IP, regulatory law, and licensing, including matters with USDA, FDA, and EPA. Cassie is also Co-Chair of the Data Privacy and Cybersecurity practice group. For additional information, please contact Cassie directly via e-mail at cassie.edgar@ipmvs.com. Please seek consultation for specific inquiries, as this publication provides overview data only and does not provide legal advice.