Your expertise is the point

What labor economics actually says about AI and permitting jobs

Safouen Rabah

Founder and CEO at Govstream.ai

Table of contents

  1. What labor economics actually says about AI and permitting jobs

  2. The question the headlines don't ask

  3. Why the final step is the most important one

  4. What matters is where the automation hits

  5. The expertise that remains becomes more valuable

  6. What we're building, and how we measure ourselves

A few months ago, we were in a meeting with our customer team in Louisville, Kentucky. The conversation had been going well, the kind where a team is genuinely engaged, asking sharp questions, mapping new tools to their specific workflows. As we were discussing the current permit application intake process, one of the participants said something that quietly stayed with me.

"I have two degrees in urban planning. I didn't go into this profession so I could check the formatting of PDF files in an application over and over."

She wasn't being dramatic. She was being precise. Urban planners, city engineers, reviewers/plan examiners, permit technicians: these are people who chose a profession because they care about the built environment. They have expertise in codes, zoning overlays, community context, and structural safety that takes years to develop. And for too many of them, too much of any given week is spent on work that has nothing to do with that expertise: verifying that documents are present, checking that a scale bar exists, answering the same question for the fortieth time that month.

The AI headlines don't make any of this easier. Every few weeks, another story runs about which white-collar jobs are next in line for automation. Permitting and planning have started to appear on those lists. And when your employer is evaluating AI-powered tools that can handle intake checks and answer applicant questions automatically, it is not unreasonable to wonder what that means for you.

It is a fair question. It deserves a real answer: not reassurance, and not spin, but evidence. Over the past year, three serious labor economics papers have examined how AI automation actually affects the workers whose jobs it touches. When you apply their frameworks to what permit reviewers, planners, and permit techs actually do, a clear and consistent picture emerges. The headlines are missing something important.

The question the headlines don't ask

The standard framing of AI and jobs treats "automation exposure" as a single variable: how much of your job can a machine do? The higher the number, the more at risk you are. This logic produces alarming headlines. It also produces, according to the economists who have studied it most carefully, systematically misleading predictions.

Three papers published in 2025 and 2026, from MIT, Yale, Northwestern, and the University of Toronto, each identify a different reason why. Together, they build a more precise and more reassuring picture of what happens to workers in jobs where AI takes over specific tasks.

The question that matters is not how much of the job can be automated. It is: which part?

Why the final step is the most important one

Joshua Gans and Avi Goldfarb at the University of Toronto published O-Ring Automation through the National Bureau of Economic Research in January 2026. Their starting point is an idea borrowed from the 1986 Challenger disaster.

The space shuttle Challenger was destroyed not by any failure of its major systems, but by a single rubber O-ring seal on one of its solid rocket boosters, a component so inexpensive it barely registered in the mission budget. Economist Michael Kremer, writing in 1993, drew a precise lesson from this: in production chains where every step is essential to the final output, the value of the whole depends on the reliability of every link. A single failure, however small, can destroy the value of everything upstream.


Gans and Goldfarb apply this framework to AI automation, and their finding inverts what the headline version of the story would predict. In production chains where tasks are quality complements, where the quality of each step multiplies the value of every other, automating some tasks does not necessarily reduce what workers earn. It can actually raise it, because the automation concentrates value onto the steps that remain human. As their paper states: "labour income can rise under partial automation because automation scales the value of remaining bottleneck tasks."

This leads them to a conclusion with direct implications for how we should evaluate displacement risk: "The relevant object is not average task exposure but the structure of bottlenecks and how automation reshapes worker time around them."

For permit reviewers, the analysis is clarifying. A building permit is not a collection of independent tasks. It is a chain whose final link cannot be delegated to an algorithm: the professional authorization, the life-safety sign-off, the legal stamp that makes the project buildable. A municipality cannot transfer the liability of a structural or fire-safety determination to a machine. A licensed professional must review the file, exercise judgment, and take professional and legal responsibility for what gets built.

That step is the O-ring. AI can process upstream tasks with speed and accuracy. Until it can accept legal and professional accountability for the final authorization, the reviewer is not optional: not by policy preference, but by structural necessity. And according to Gans and Goldfarb, that indispensability makes the reviewer more valuable as the automation around them improves, not less.
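Kremer's multiplicative logic can be made concrete with a toy calculation. The numbers below are invented for illustration, not taken from the paper: when output value is the product of every step's quality, improving the automated upstream steps raises the payoff that hinges on the final human sign-off.

```python
# Toy Kremer-style O-ring production (hypothetical numbers).
# Output value is multiplicative in the quality of every step, so raising
# the quality of automated upstream tasks raises the marginal value of the
# human step that remains.

def output_value(step_qualities, scale=100.0):
    """Multiplicative O-ring production: every step's quality multiplies the rest."""
    value = scale
    for q in step_qualities:
        value *= q
    return value

def marginal_value_of_signoff(upstream, signoff=0.9, baseline=0.5):
    """Extra value created by a careful sign-off versus a careless one."""
    return (output_value(upstream + [signoff])
            - output_value(upstream + [baseline]))

# Before automation: three upstream intake tasks at 0.7 quality each.
print(marginal_value_of_signoff([0.7, 0.7, 0.7]))     # ≈ 13.7
# After automation: upstream tasks run at near-perfect consistency,
# while the human sign-off itself is unchanged.
print(marginal_value_of_signoff([0.99, 0.99, 0.99]))  # ≈ 38.8
```

The sign-off step did not change; only the steps around it did, and its marginal value nearly tripled. That is the "automation scales the value of remaining bottleneck tasks" mechanism in miniature.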

What matters is where the automation hits

The second paper approaches the question from a different direction. Menaka Hampole of Yale, Dimitris Papanikolaou and Bryan Seegmiller of Northwestern, and Lawrence Schmidt of MIT published Artificial Intelligence and the Labor Market as an NBER working paper in 2025, with a revised version in September of the same year.

Their contribution is to separate two things the standard exposure framework treats as equivalent. They distinguish between the mean exposure of a job's tasks to AI (how much automation in aggregate) and the concentration of that exposure in a small number of tasks. These two variables have opposite effects on employment.

When automation is distributed thinly across all of a job's tasks, each task becomes slightly easier, and employers can eventually accomplish the same work with fewer people. But when automation is concentrated in one or two specific, high-volume, repetitive tasks, leaving the rest of the job intact, the effect is different. Workers are freed from the most time-consuming parts of their role and can redirect their effort toward everything that remains. The paper describes the mechanism: "an automated expense reporting system allows impacted workers to redistribute effort to unaffected tasks, potentially enhancing their overall productivity."
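The distinction can be sketched numerically. With hypothetical exposure scores (invented for illustration), two jobs can share the exact same mean exposure while differing sharply in how concentrated that exposure is, which is the variable the paper argues actually matters.

```python
# Hypothetical illustration: mean exposure vs. concentration of exposure.
# Two jobs with identical average AI exposure across five tasks, but very
# different distributions of where that exposure falls.

def mean_exposure(task_exposures):
    return sum(task_exposures) / len(task_exposures)

def concentration(task_exposures):
    """Herfindahl-style index of exposure shares:
    1/n if spread evenly, approaching 1 if concentrated in one task."""
    total = sum(task_exposures)
    shares = [e / total for e in task_exposures]
    return sum(s * s for s in shares)

spread_thin = [0.4, 0.4, 0.4, 0.4, 0.4]   # every task slightly automatable
concentrated = [1.0, 1.0, 0.0, 0.0, 0.0]  # two intake tasks fully automatable

print(mean_exposure(spread_thin), mean_exposure(concentrated))  # 0.4 and 0.4
print(concentration(spread_thin))    # 0.2 (evenly spread)
print(concentration(concentrated))   # 0.5 (concentrated)
```

A headline built on mean exposure treats these two jobs as identical. The Hampole et al. framework predicts opposite outcomes for them.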


The net conclusion is one of the most consequential findings in the recent AI-and-labor literature: despite clear substitution at the task level, "overall employment effects are muted by offsetting forces." More specifically, the paper finds that occupations where AI exposure is most concentrated experience higher relative employment growth, not lower.

Now think about the distribution of AI exposure across a permit reviewer's week. A significant share of their time goes to tasks that are highly automatable: verifying that a document is present, confirming that required fields are completed, answering the same five applicant questions that arrive in different wording every day, checking whether a site plan has a north arrow, a scale bar, and a licensed engineer's stamp. For many permit techs and reviewers, this is not where 15% of their time goes. It is where the majority of their intake bandwidth goes, week after week.

This is concentrated exposure, not distributed exposure. AI is not chipping away at every part of the job simultaneously. It is targeting the administrative scaffolding that surrounds expert review and clearing it away. What that produces is not a reduced need for reviewers. It is a freed reviewer, able to direct their full capacity toward the backlog that every city with published permit data is struggling to clear.

The demand for skilled review is not shrinking. It is overwhelming current supply. Concentrated automation removes the bottleneck. The reviewers are still needed. Now they can actually catch up.

The expertise that remains becomes more valuable

The third paper comes from David Autor and Neil Thompson at MIT, published through the MIT Shaping the Future of Work Initiative in June 2025. Their framework introduces a concept that most automation analysis overlooks: not all tasks in a job require the same level of expertise. When automation removes tasks from a job, what matters is the expertise level of the tasks that remain.

The central question of their paper Expertise is this: when automation eliminates tasks from an occupation, does it raise or lower the expertise required for the work that is left? Their answer, validated against four decades of occupational labor market data, is that it depends entirely on which tasks are removed.

Their core finding is precise enough to quote directly: "automation has raised wages and reduced employment in occupations where it eliminated inexpert tasks, but lowered wages and increased employment in occupations where it eliminated expert tasks." The implication: if AI removes the expert work, the job gets commoditized and wages fall. If AI removes the supporting work (the routine, lower-barrier tasks surrounding the expertise), what remains is more expert, not less, and wages rise accordingly.
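A toy calculation makes the mechanism visible. The task names and expertise scores below are invented for illustration, not taken from the paper: removing the low-expertise tasks raises the expertise level of the average remaining hour.

```python
# Hypothetical sketch of the Autor-Thompson mechanism: when automation removes
# tasks, what matters is the expertise level of the tasks left behind.
# Hours and expertise scores are made up, for illustration only.

reviewer_tasks = {
    "verify documents present": {"hours": 12, "expertise": 1},
    "answer routine questions": {"hours": 10, "expertise": 1},
    "check plan formatting":    {"hours": 6,  "expertise": 2},
    "variance review":          {"hours": 6,  "expertise": 8},
    "structural assessment":    {"hours": 4,  "expertise": 9},
    "final authorization":      {"hours": 2,  "expertise": 10},
}

def avg_expertise(tasks):
    """Hours-weighted average expertise of a task mix."""
    total_hours = sum(t["hours"] for t in tasks.values())
    return sum(t["hours"] * t["expertise"] for t in tasks.values()) / total_hours

# Automation removes the low-expertise tasks (expertise <= 2); the rest remain.
remaining = {name: t for name, t in reviewer_tasks.items() if t["expertise"] > 2}

print(round(avg_expertise(reviewer_tasks), 2))  # ≈ 3.45 before automation
print(round(avg_expertise(remaining), 2))       # ≈ 8.67 after
```

The expert tasks were untouched, yet the job as a whole became markedly more expert, which is the direction Autor and Thompson associate with rising wages.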


Autor and Thompson illustrate this with two occupations transformed by computerization over the past forty years: accounting clerks and inventory clerks. Both had significant automation exposure. Both had routine, codifiable tasks removed by computers. But the work that remained was very different.

For accounting clerks, what was left after automation was complex problem-solving, financial analysis, and professional judgment: tasks requiring more expertise, not less. Their wages rose and the role became harder to enter. For inventory clerks, what remained after automation was largely physical and routine: counting, stocking, verifying against a list. Their wages fell and the role opened to workers with lower qualifications.

The same level of automation exposure. Two completely different outcomes. The difference was not how much the technology automated — it was what was left over.

For permit reviewers, the answer to that question is unambiguous.

Strip away the intake triage, the completeness checks, the repetitive applicant Q&A, and the arithmetic of basic code compliance. What remains? Variance review that requires judgment about community context and legal precedent. Complex structural assessments where the code provides a framework but not an answer. Projects that trigger overlapping requirements: historic district guidelines, critical area overlays, ADU eligibility, stormwater thresholds, each of which involves professional discretion rather than pattern matching. Developer negotiation. Final professional authorization for projects that people will live and work in.

These are the tasks that make a permit reviewer a permit reviewer: the work for which they earned their degrees and their licenses. Automation in permitting does not commoditize this role. It clears the administrative scaffolding that was obscuring it.

What we're building, and how we measure ourselves

We are not neutral observers in this conversation. Govstream.ai builds AI assistants for permitting departments, and our products do exactly what the research describes as the protective pattern of automation: they target the concentrated, repetitive work (intake screening, completeness checks, applicant Q&A) and remove it from the reviewer's plate.

Permit Guide handles the inbound questions that consume counter staff time: the same questions, asked differently by different applicants, dozens of times a week. Staff see the draft answer and send it. It doesn't go out without them. The Application Assistant screens submissions for completeness before they enter the review queue, so reviewers open organized, complete files rather than chaotic first submissions. First Review helps prioritize the queue, identifying which files warrant close attention and which are straightforward, so reviewers can direct their expertise to what actually requires it.
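The "draft, then human approval" pattern described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Govstream's actual implementation:

```python
# Minimal human-in-the-loop sketch (hypothetical, for illustration):
# the system drafts, but nothing goes out without explicit staff approval.

from dataclasses import dataclass

@dataclass
class DraftAnswer:
    question: str
    draft: str
    approved: bool = False

def draft_response(question: str) -> DraftAnswer:
    # Stand-in for an AI-generated draft; a real system would call a model here.
    return DraftAnswer(question=question, draft=f"Draft reply to: {question}")

def send(answer: DraftAnswer) -> str:
    # The gate: sending requires explicit human approval.
    if not answer.approved:
        raise PermissionError("Draft has not been approved by staff.")
    return f"Sent: {answer.draft}"

reply = draft_response("Do I need a permit for a 120 sq ft shed?")
try:
    send(reply)                  # raises: no staff approval yet
except PermissionError:
    reply.approved = True        # staff reviews the draft and approves it
    print(send(reply))
```

The design choice is the approval gate itself: the AI's output is an input to a human decision, never a substitute for one.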

Every one of these products is designed to hand the reviewer a decision-ready situation, not to make the decision for them. The reviewer decides. The system prepares.

We are willing to be held accountable to specific, measurable outcomes: time to first decision, resubmission rate, inbound inquiry volume. These are throughput metrics, not headcount metrics. Our goal is to help the same team serve more of their community — not to reduce the size of that team.

The constraint in permitting is not that cities have too many reviewers. It is that too much of every reviewer's day is spent on work that was never the point of the job.

The profession this could be

The planner in Louisville who made the comment about checking PDF files never stopped being an expert in the built environment. She simply had very little time to act like one.

The research says her expertise is not being made redundant by AI. It is being made legible. When the administrative scaffolding falls away, what remains is the judgment, the professional knowledge, the community stewardship: the craft. And according to the most rigorous labor economics of this moment, that craft becomes more valuable when the supporting work is removed, not less.

Cities are not running out of development to review. They are running out of capacity to review it at the pace their communities need. The growing backlog is not evidence that permit reviewers are unnecessary. It is evidence that the work surrounding their expertise has consumed the capacity needed to do the actual job.


That is the problem AI should solve in permitting. Not replacing the professional. Clearing the path to the work that makes the profession so essential in the first place.

The housing is waiting. And so is a better version of a career that too many talented, credentialed, community-minded people are spending checking PDF files.

Sources:

Gans, Joshua S. and Goldfarb, Avi. "O-Ring Automation." NBER Working Paper No. 34639. January 2026. nber.org/papers/w34639

Hampole, Menaka, Papanikolaou, Dimitris, Schmidt, Lawrence D.W., and Seegmiller, Bryan. "Artificial Intelligence and the Labor Market." NBER Working Paper No. 33509. February 2025, revised September 2025. nber.org/papers/w33509

Autor, David and Thompson, Neil. "Expertise." MIT Shaping the Future of Work Initiative. June 2025. shapingwork.mit.edu

________________________________________________________________________

Govstream.ai builds intelligent AI-powered permitting workflows for cities and counties. Our platform (Permit Guide, Application Assistant, and Review Assistant) brings continuous feedback, intelligent routing, and real-time decision support to every stage of the permitting process to support staff and guide builders.
