Use of AI in the Legal Industry Faces Challenges

The legal industry’s widespread adoption of generative artificial intelligence (AI) will necessarily involve recalibrating workforce roles and skills and reckoning with varying degrees of readiness and trust among professionals within and across industries. Coupled with the challenges of this reboot are the thorny issues surrounding security, privacy and ethics.

Additionally, the belief that generative AI will revolutionize efficiency across much of legal work is tempered by the fact that a majority of legal professionals have reservations about the industry’s current preparedness for this AI-driven future.

Compounding the challenge of practically preparing the industry is the current lack of industrywide consensus about the merits of generative AI. While the evangelists are deeply optimistic that generative AI will have positive impacts on the profession, they currently represent only about half of professionals.

As the industry wrestles with these as-yet unresolved issues, the road to widespread adoption of AI in the legal industry remains neither straightforward nor universally agreed upon.

Shifting gears: Changes in the legal industry’s workforce for an AI-enabled future

Many legal players are nonetheless restructuring their practices to accommodate the use of generative AI and, in doing so, are changing work dynamics and skill requirements. This has led to concerns that roles traditionally performed by humans are at risk. More than two-thirds of law professionals recently indicated that roles responsible for much of the life cycle of knowledge management and research in the industry could be replaced by generative AI. However, these same respondents also voiced skepticism about the technology’s ability to perform high-level legal work, such as facilitating corporate restructuring or navigating international trade disputes.

The potential challenges to traditional roles, however, simultaneously present counterbalancing opportunities: the rising demand for both specialized AI skills and industry-specific technologies. Law firms are increasingly seeking AI experts, and the competition for LawTech talent has intensified. For instance, some firms are planning to expand their teams of lawyers and developers who work with AI. In fact, Allen & Overy recently introduced a chatbot to assist attorneys in drafting contracts and client memos — and its rivals are following suit.

The shakeup that generative AI is bringing to the legal industry is affecting not only the workplace but also its precursor — legal education. Universities across multiple continents have established initiatives or courses to equip students and professionals with the skills to interact with AI in their practices.

For example, the University of Liverpool offers a module that provides “hands-on experience” with legal tech tools, while the University of Technology, Sydney, has introduced specialized courses that cover topics ranging from governance and regulatory risks of AI use in legal matters to possible failure points of AI. In the United States, the University of Arizona Law School is spearheading a multi-institution initiative to prepare law libraries across the country for the strategic implementation of AI into their operations.

As generative AI marches more deeply into legal territory, the discrepancy between roles at risk and those that require more nuanced human judgment will likely widen. This will necessitate a more systemic shift in legal education and clerking that focuses much less on rote skills and much more on strategy, ethics and other human-centric capabilities. Consequently, the legal firms most likely to pull ahead in this transition may not necessarily be the ones that adopt AI the fastest but those that adapt most holistically to this emerging ecosystem.

The trust gap: A legal industry polarized about its AI readiness

Thus far, the use of generative AI in the legal industry has been characterized by a patchwork of readiness and trust, as evidenced by a range of conflicting opinions among legal professionals and firms. On the one hand, the legal industry exhibits cautious optimism about generative AI: More than 6 in 10 law firms and corporate legal departments believe the technology will deliver significant business advantages. On the other, 72% of legal professionals are doubtful that the industry is adequately prepared for the looming AI revolution, and, as noted earlier, only 1 in 5 believe the advantages of using AI surpass the disadvantages.

Partly shaping this cautious outlook are concerns about the trustworthiness and reliability of generative AI in a legal context. More than half of legal professionals are uncertain about the technology’s reliability, and nearly 2 in 5 do not trust it. Consumers of legal services are not entirely won over either, with 55% of clients and potential clients expressing serious concerns about the use of AI within the legal profession.

In the near term, disparities in perceptions about trust and readiness may create a segmented legal services market in which AI adoption varies significantly depending on the size of the firm and the specific legal tasks involved. As the industry becomes more accustomed to what AI can and cannot do, these disparities are likely to converge toward a more uniform framework of AI adoption.

Ethical roadblocks: Security and privacy concerns surround AI usage in the practice of law

Although generative AI promises to unleash unprecedented efficiency gains for the legal sector, it also raises complex questions about security, privacy and ethics that cannot be ignored. The industry’s initial outlook toward these aspects has been one of both caution and concern.

Many in the legal industry are wary of using generative AI, particularly consumer-facing AI technologies such as ChatGPT. More than 60% of legal professionals do not currently use the technology in their practice, citing security and privacy concerns. Their reluctance stems primarily from unresolved questions of how AI technologies handle privacy and ensure client confidentiality — a view disproportionately held by firms’ partners and managing partners. In response to these concerns, some law firms have already adopted internal measures, including warnings and outright bans against unauthorized use of generative AI in their legal work.

The industry’s cautious stance toward the adoption of generative AI technologies is in large measure a manifestation of ethical and operational concerns pertaining to the use of the current iterations of this tech. If generative AI technologies become more robust and sophisticated, will the legal industry’s caution evolve into acceptance? Or will ethical and security concerns be amplified, creating even stronger barriers to adoption? Regulation and guidelines will be instrumental in answering these questions.