Coping with the Co-Pilot
How AI is Changing Knowledge Work
As generative AI “co-pilot” tools become commonplace in offices, knowledge workers are navigating a transformation in how they do their jobs. This is especially true for non-technical professionals like marketers, analysts, consultants, writers, managers, and educators. Below, we explore recent findings (mostly 2024–2025) on how AI is impacting workplace experiences, required skills, organizational practices, and emerging best practices. The picture is nuanced – AI can boost productivity and creativity, but it also introduces new stresses and forces a rethink of roles and workflows.
1. AI’s Impact on Workload, Productivity and Burnout
Productivity Boosts vs. the “AI Overwhelm”: Early research indicates generative AI can dramatically increase productivity for many writing and analytical tasks. For example, a 2023 MIT experiment found that white-collar workers using ChatGPT produced higher-quality work 37% faster on certain tasks. Likewise, a study at BCG saw consultants with access to GPT-4 outperform peers on every measured dimension. In practice, many knowledge workers praise AI for helping handle drudge work: in a Slack survey, 81% of employees who do use AI said it improves their productivity, and they report higher engagement and overall job satisfaction (22% higher) than non-users. Power users even say AI makes their overwhelming workloads more manageable (92% agree) and helps them focus on top priorities.
Yet alongside these gains, there is an AI productivity paradox emerging. A broad 2024 Upwork survey of 2,500 knowledge workers (US, UK, Canada, Australia) found 77% of employees felt AI tools had actually decreased their productivity and added to their workload – a stark contrast to the 96% of executives who expected AI to increase productivity. One major reason is the hidden “overhead” of using AI: time spent learning ever-updating tools, crafting effective prompts, and double-checking AI outputs for errors. In fact, 81% of executives in that survey acknowledged they had increased demands on workers in the past year, likely assuming AI would help workers handle more. This gap between leadership expectations and employees’ day-to-day reality is creating new stress on staff.
New Forms of Cognitive Load – “Prompt Fatigue” and AI Burnout: Knowledge workers now describe a peculiar kind of exhaustion from working with AI assistants. Constantly prompting, refining, and fact-checking an AI’s output can become a grind. In one anecdote, a Reddit user realized they had “probably written a thousand pages” of prompts in pursuit of better GPT outputs. As one writer quipped, “Forget decision paralysis. We’re now dealing with prompt paralysis” – a sneaky new burnout that “hits you… inside your chat window with an AI”. The term “prompt fatigue” has entered the lexicon to describe this mental wear-and-tear. Unlike traditional burnout, which might stem from too many meetings or emails, prompt fatigue creeps in via the trial-and-error of coaxing AI – the endless rewriting and refining of prompts in search of a good result. It’s the cognitive load of having an AI “co-pilot” always beside you – helpful, but also demanding your guidance at each step.
Worse, when organizations rush to adopt AI across workflows, employees can feel more overwhelmed. Case Study – AI Overload at a PR Agency: Anurag Garg, who heads the PR firm Everest PR, eagerly mandated ChatGPT for everything from brainstorming story ideas to transcribing notes. Instead of speeding up work, it slowed the team down. Employees had to write detailed prompts and then double-check the AI’s often-inaccurate output, on top of relearning new AI features with each update. “There were too many distractions,” Garg says – tasks began taking twice as long with AI in the mix. Juggling multiple AI tools became “more of a mess,” and keeping up with a flood of new apps left everyone frustrated and burnt out. Garg ultimately reversed the “AI-everywhere” edict. The team scaled back to using AI only for specific research tasks, and “everyone is much happier” and more connected to their work again. This story illustrates that indiscriminate use of AI can backfire – without clear purpose and manageable scope, co-pilots risk increasing cognitive load instead of relieving it.
Rising Stress and “AI Fatigue”: Surveys confirm that many knowledge workers feel anxious and overloaded by the rapid introduction of AI. A late 2024 Wiley survey of North American employees found 96% were experiencing stress around changes at work, and 40% were specifically “struggling to understand how to integrate AI into their work.” Fully 75% admitted they lack confidence in how to effectively use AI tools. Their managers aren’t much more prepared – only 34% of people managers felt equipped to support their team in adopting AI. This lack of clarity feeds a sense of overwhelm. In Slack’s global poll, two-thirds of desk workers had never tried AI at work as of late 2024, and nearly 2 in 5 said their company had no guidelines for AI use – reflecting uncertainty and caution among employees.
Even those enthusiastically using AI worry about long-term burnout. In one survey, 61% of workers believed that using AI at work will increase their chances of burnout, a figure that spiked to 87% for workers under 25. And while AI might save time on some tasks, employees fear it could erode work-life balance – 43% said they feel AI will negatively affect their work–life balance. This is echoed by findings on “AI technostress.” An academic study using the Job Demands–Resources model found that AI-related technostress (e.g. dealing with new AI systems and fears of job replacement) increases exhaustion, exacerbates work–family conflict, and lowers job satisfaction, even though the AI tools can also contribute to higher productivity. In short, AI’s dual impact is clear – it can both lighten workloads and create new pressures. Workers get relief from drudgery, but face mental fatigue from supervising digital assistants and keeping up with “yet another tool.”
2. Shifting Skill Values and Professional Identity in the AI Era
The rise of AI co-pilots is reshaping what skills are in demand and how professionals perceive their own value. Traditional “soft” skills like writing from scratch, meticulous data analysis, or flawless attention to detail are still important – but increasingly, the ability to work effectively with AI is seen as the must-have skill. In Microsoft’s 2024 Work Trend Index survey of 31,000 people, 71% of business leaders said they would hire a less-experienced candidate with AI skills over a more-experienced candidate without them. Furthermore, 66% of leaders flatly say they would not hire someone lacking AI skills. In the same vein, 77% of leaders predict that, because junior employees can now delegate work to AI, even entry-level staff will be given greater responsibilities earlier in their careers. In other words, knowing how to leverage AI has become a great equalizer – and possibly more valuable than years of experience in certain tasks. This marks a major shift in how “talent” is evaluated. As one organizational psychologist put it, companies are beginning to “renegotiate the ‘operational contract’ – the how of work – with their employees as AI puts more power into the hands of workers in terms of the way the job gets done.”
From Content Creator to AI Editor: Many knowledge workers are finding their day-to-day role evolving from content creation to content curation. Instead of writing the first draft, they’re editing AI-generated drafts. Instead of manually analyzing data, they’re verifying AI-generated insights. Skills like prompt design, critical judgment, and quality assurance are now at a premium. In fact, “prompt engineering” is often touted as “the new Excel” – a baseline digital skill everyone will need, akin to knowing spreadsheets. We also see professionals actively upskilling to stay relevant: 76% of workers say they need new AI skills to remain competitive in their field. Non-technical roles are embracing this – LinkedIn learning courses on AI saw a 160% usage spike among job titles like project managers, architects, and administrative assistants in the last year. And on LinkedIn profiles, there was a 142× increase in members adding AI skills (like “ChatGPT” proficiency) to their resumes, with writers, designers, and marketers leading the pack. Clearly, many are eager to pair domain expertise with AI know-how.
Erosion of Confidence and Meaning: On the flip side, when AI starts taking over core parts of one’s job, some workers report a loss of confidence or sense of purpose. If an AI can produce in seconds what you trained for years to do – write a strategy memo, draft a marketing plan, analyze a dataset – it’s natural to feel unsettled. In the Wiley survey above, three in four employees admitted lacking confidence in how to use AI, highlighting a fear of inadequacy in this new AI-infused workplace. Beyond just tool proficiency, there’s a deeper identity shift: knowledge workers who prided themselves on thoughtful writing or creative ideation must now see those skills augmented (and sometimes outshined) by machine outputs. “You could feel stressed about having ended up in an environment of high volume and low control, when what you originally wanted to do was interact personally with clients and make a difference,” says Leah Steele, a former lawyer turned burnout coach. She observes many professionals (in law and other fields) feeling disillusioned when technology changes the nature of their work. For instance, junior lawyers expected to use AI tools to review documents might find themselves doing less of the client-facing advisory work that attracted them to the profession. Steele notes that burnout now is “not just about the volume of work… but how we feel about the work and what we’re getting from it.” If AI-driven processes strip away the rewarding aspects of the job, employees can lose motivation.
There is also the psychological effect of knowing one’s role might be diminished by AI. The fear of replacement looms large and can erode confidence. Steele points out that people are “stressed about the risk of losing [their] job, and the fear of being replaced because [they’re] no longer enjoying the work as it’s become so tech driven.” This highlights a vicious cycle: as AI takes over more tasks, a worker’s enjoyment and engagement can decline, which in turn makes them worry about being outperformed or replaced – further undermining their professional identity.
Soft Skills Still Matter – Perhaps More Than Ever: Despite these anxieties, experts emphasize that human judgment and soft skills are still critical, even in an AI-rich workflow. AI might generate content, but humans ensure it’s contextually appropriate, ethically sound, and resonates emotionally with other humans. Skills like critical thinking, domain expertise, communication, and empathy become the differentiators that AI can’t easily replicate. In fact, one notable finding is that generative AI tends to enhance jobs rather than fully automate them. A global analysis by McKinsey (2023) projects that while at least 50% of today’s work activities could be transformed by AI, only about 5% of jobs are truly at risk of full automation – most roles will be augmented, not replaced. Many knowledge workers are already discovering that partnering with AI frees them to apply higher-order skills. For example, a marketer might spend less time churning out draft copy and more time on creative strategy and campaign ideas. A teacher might use AI to grade routine quizzes, giving them more time to mentor students. Rather than making human skills obsolete, AI is raising the bar for what human contribution means: it puts a premium on oversight, ethical judgment, cultural nuance, and the “last mile” of polishing and implementing ideas.
3. Organizational Dynamics: Teams, Training, and Policies in the Co-Pilot Age
The introduction of AI co-pilots is not just an individual shift – it’s causing ripple effects in team collaboration, management strategy, and company policy. Organizations are grappling with how to harness employees’ enthusiasm for AI while mitigating risks and avoiding chaos.
Bottom-Up Adoption vs. Top-Down Uncertainty: One striking trend is that employees aren’t waiting for permission – they’re bringing AI tools into their workflows to cope with heavy workloads. Microsoft’s 2024 Work Trend Index reported that a full 75% of global knowledge workers are now using generative AI at work, often adopting tools on their own to help handle the “digital debt” and information overload they face. This grassroots adoption reflects real pain points: people feel “overwhelmed with digital debt and under duress at work – and they are turning to AI for relief.” At the same time, many business leaders concede they “lack a plan and vision” for how to strategically implement AI across the organization. We’re essentially in a phase where individual experimentation with AI is outpacing corporate governance. Slack’s Workforce Index found executive urgency to use AI had skyrocketed (7× increase) and become a top-of-mind concern, yet two-thirds of workers hadn’t tried AI and nearly 40% said their company had no AI usage policy in place. This mismatch can breed tension: some employees forge ahead using ChatGPT or similar tools in silo, while others hold back out of uncertainty or even ethical concerns.
Call for Guidance and Training: Clear organizational guidance is sorely needed – and employees themselves are asking for it. According to a 2024 global survey by Veritas, 77% of office workers want their employer to provide guidelines, policies or training on proper generative AI use. Without such guidance, confusion and resentment can set in. Notably, nearly a quarter (23%) of workers in that survey don’t use AI at all and even feel that coworkers who do use it “should be penalized.” This hints at a cultural divide: in the absence of policy, some staff view AI as cheating or risky, while others dive in – a recipe for conflict. Employers that fail to set boundaries also face practical risks. Roughly 31% of office workers admitted to inputting sensitive data (customer details, financials, etc.) into AI tools, and the majority were unaware of the data privacy risks (61% didn’t realize they might be leaking confidential info). Without an AI usage policy, companies could blunder into compliance violations or security breaches.
Some organizations are rising to this challenge. Forward-thinking companies have begun issuing internal AI usage policies and launching training initiatives. For example, in mid-2024 PwC announced it would roll out ChatGPT Enterprise access to 100,000 employees, paired with extensive AI upskilling programs to ensure staff use these tools responsibly and effectively. Many firms are incorporating AI guidelines into their codes of conduct – covering issues like when employees should (or shouldn’t) use AI, how to verify AI outputs, and what data is off-limits for AI inputs. The U.S. Department of Labor even released best practices urging employers to train workers on AI and to protect them from unfair outcomes of AI implementations. The message is clear: leaving employees to figure out AI on their own is not a sustainable strategy. In the Wiley Workplace Intelligence survey, employees said the top things that would make them more comfortable with workplace AI were company-provided training (61%), a clear organizational AI strategy (54%), and explicit expectations for AI use in their role (48%). Workers are essentially pleading: “Tell us how and when to use these tools – don’t just throw them at us.”
Teamwork and Workflow Changes: Integrating AI co-pilots also means rethinking workflows and collaboration. Teams that find success often start by identifying specific pain points that AI can address rather than blanket-adopting AI for everything. The Everest PR case above is instructive – they learned that AI worked best as a research assistant, not as a writer for client pitches. Similarly, other teams have discovered new hybrid workflows: a human might outline a piece, an AI drafts it, then another human reviews and refines. Or an analyst generates initial insights with AI, then the team discusses and validates before deciding. These “human-AI collaboration loops” require clarity about roles: when is the AI the first draft producer vs. when is it just an informant? Who has final sign-off on AI-assisted work? Many companies are embedding a “human in the loop” requirement, mandating that employees must review AI-generated content (for accuracy, tone, compliance) before it goes out.

Performance evaluations may also need updating – to credit the productivity boost from AI augmentation, but also to ensure employees remain accountable for the outputs. Some organizations have begun explicitly evaluating how well employees use AI tools: for instance, considering “AI aptitude” or efficiency gains in performance reviews. A “hidden talent shortage” is emerging for these skills – tech-savvy non-engineers are in demand. As noted earlier, 66% of leaders say they won’t hire new people without AI skills, and even within companies, those who quickly adopt AI may get fast-tracked. In LinkedIn’s data, 69% of people believe mastering AI tools can help get them promoted faster, and 79% say it will broaden their career opportunities. Middle managers in particular are in a bind: they must learn to supervise teams that include AI agents, set realistic expectations, and prevent both under- and over-reliance on AI among their reports.
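The “human-AI collaboration loop” described above – AI drafts, a human reviews against explicit checks, a human signs off – can be sketched in a few lines of code. This is a purely illustrative toy under stated assumptions (the function names and checklist items are invented here, and the model call is stubbed out), not any company’s actual workflow:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    ai_generated: bool = True
    approved: bool = False
    review_notes: list = field(default_factory=list)

def ai_draft(outline: str) -> Draft:
    # Stand-in for a real model call; in practice this would hit an LLM API.
    return Draft(text=f"[AI-generated draft from outline: {outline}]")

def human_review(draft: Draft, checks: dict) -> Draft:
    # A human confirms each checklist item; any failure blocks sign-off
    # and is recorded so the draft can be revised and re-reviewed.
    failures = [name for name, passed in checks.items() if not passed]
    draft.review_notes = failures
    draft.approved = not failures
    return draft

# One pass through the loop: the AI produces the first draft,
# but the human retains final sign-off.
draft = ai_draft("Q3 campaign recap for the client")
draft = human_review(draft, {
    "facts verified against sources": True,
    "no confidential data in prompt or output": True,
    "tone matches house style": True,
})
print("approved" if draft.approved else f"revise: {draft.review_notes}")
```

The point of the structure is that `approved` can only be set by the human review step, never by the drafting step – encoding the “who has final sign-off” question directly in the workflow.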
Case Study – Guardrails in Action: Consider the experience of a global law firm that introduced an AI research tool for its associates (hypothetical composite based on industry reports). Initially, some associates used the tool to draft client memos, which led to a few embarrassing mistakes in factual content. The firm responded by updating its workflow: AI could be used to summarize case law or generate outlines, but human lawyers had to compose the final advice, with a peer review step to double-check any AI-derived material. They also hosted training sessions highlighting AI’s known pitfalls (like fabricated citations). After these adjustments, the associates found the AI truly helpful – it saved them hours on tedious literature reviews, yet final outputs remained polished by human expertise. This kind of structured approach is becoming common: employees get the green light to use AI, but within a clear framework that defines where the “hand-offs” between AI and human occur.
In contrast, companies that fail to set such norms may struggle. We’ve seen cases where employees feel pushed to use AI without guidance and end up, as one person described, “feeling burdened by increased workload demands after introducing AI-based productivity tools.” In Leah Steele’s work with burned-out lawyers, she notes that firms rolled out new tech that massively increased the volume of work (one person’s caseload jumped from 50 to 250 clients after an AI tool came in) without recalibrating expectations or processes. The lesson: AI adoption must be accompanied by workflow redesign and managerial support, or else it simply accelerates work without alleviating pressure.
4. Emerging Best Practices for Thriving with AI Co-Pilots
As the dust begins to settle on the initial AI upheaval, leading individuals and organizations are developing playbooks for sustainable AI integration. Several best practices and norms are coming into focus:
Start Small, Target High-Impact Uses: Rather than deploying AI broadly with vague goals, successful teams begin with one or two well-defined use cases. For example, a marketing team might first use AI only to draft social media posts, where it reliably saves time, before expanding it to more complex content. Experts suggest an incremental approach: “Start with one AI tool for a specific, well-defined task… Build a reliable validation process before expanding AI use to more complex tasks.” This prevents the chaos of too many tools and allows workflows to adapt around the AI gradually.
Develop Prompt Libraries & QA Checklists: Experienced AI users often maintain a prompt library – a shared repository of effective prompts and queries for common tasks – so that everyone on the team can benefit from what works (and avoid what doesn’t). Alongside this, teams are instituting quality-check checklists. For instance, if AI is used to generate a report or analysis, the human owner must check facts, ensure no confidential data leaked into the prompt, verify the tone/brand compliance, etc., before marking the task complete. These checklists help counteract blind trust in AI and mitigate errors. It’s telling that in Slack’s survey, a mere 7% of desk workers said AI outputs are completely trustworthy – human oversight is still widely deemed essential. Even the most AI-forward companies reinforce that an employee should always remain the final editor of client-facing work.
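In code terms, a team prompt library plus QA checklist can be as simple as a shared module of vetted templates and a gate function that refuses to ship unconfirmed work. The sketch below is a minimal illustration (the template names, wording, and checklist items are invented for this example):

```python
# A shared prompt library: effective templates keyed by task, so the
# whole team reuses what works instead of rediscovering it.
PROMPT_LIBRARY = {
    "summarize_report": (
        "Summarize the following report in five bullet points "
        "for an executive audience:\n{text}"
    ),
    "draft_social_post": (
        "Write a two-sentence LinkedIn post announcing {topic}. "
        "Tone: {tone}."
    ),
}

# The QA checklist a human must confirm before AI-assisted work ships.
QA_CHECKLIST = [
    "facts and figures verified against the source",
    "no confidential or client-identifying data in the prompt",
    "tone and terminology match brand guidelines",
]

def build_prompt(task: str, **fields) -> str:
    """Fill in a vetted template rather than improvising a prompt."""
    return PROMPT_LIBRARY[task].format(**fields)

def ready_to_ship(confirmed: dict) -> bool:
    """True only if a human has ticked every checklist item."""
    return all(confirmed.get(item, False) for item in QA_CHECKLIST)

prompt = build_prompt("draft_social_post",
                      topic="our new AI usage guidelines", tone="friendly")
review = {item: True for item in QA_CHECKLIST}
print(ready_to_ship(review))
```

Keeping the checklist as data (rather than tribal knowledge) makes it auditable and easy to extend as the team learns which AI failure modes actually occur.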
Balance Automation with Human Value-Add: The mantra “don’t just use AI for the sake of AI” is becoming common. Best-in-class adopters align AI usage with its strengths – and know when to leave work to humans. Repetitive, low-value tasks (data entry, formatting, transcribing meetings) are ripe for AI automation, freeing up humans for more creative and strategic work. A recent Atlassian report on “AI collaboration” found that the most advanced AI adopters were able to “say goodbye – forever – to the tasks that drain energy and steal time,” and as a result they felt more energized and motivated, with higher job satisfaction and lower burnout levels. But for high-stakes or highly creative tasks, human judgment takes the lead. One emerging norm is the “80/20 rule” for AI: if an AI can get you 80% of the result with 20% of the effort (e.g. a rough draft or a list of insights), that’s a win – but the last 20% of polish and decision-making is where you, the human, add critical value. Workers and managers are learning which parts of a project to delegate to AI and which to keep under manual control. As one manager put it, “The goal isn’t to use AI everywhere, but to use it where it truly adds value.”
Invest in Skills and Change Management: Organizations at the forefront treat AI adoption as a change-management challenge, not just a tech rollout. That means training employees (and especially managers) in how to use AI tools efficiently, safely, and ethically. It also means addressing the human side of the transition – acknowledging employees’ anxieties and establishing an open dialogue about how roles will evolve. A World Economic Forum report (2025) noted that companies seeing the best results from generative AI are those that paired technology deployment with upskilling initiatives and a culture of continuous learning. On an individual level, being a “lifelong learner” is more important than ever. Many knowledge workers are proactively taking online courses on prompt engineering, data literacy, and AI ethics to keep their skills sharp. This not only boosts their effectiveness with co-pilot tools but also their confidence and career resilience.
Maintain a Human-Centered Ethos: Finally, even as AI takes on a bigger role, leading organizations insist on keeping people at the center. This means regularly asking: Is the AI actually helping our employees and customers, or just creating new headaches? It also means giving employees a voice in how AI is implemented. When workers feel involved in the AI adoption process, they are more likely to trust and embrace the tools. Transparency is key – explaining which decisions the AI will assist with, how its suggestions are evaluated, and being clear that using AI is meant to augment human work, not surveil or judge it. In practice, some companies have formed internal “AI councils” or pilot user groups to gather feedback from employees on what works and what doesn’t. The end goal is to develop a shared understanding that AI is a tool in service of human goals. As organizational psychologist Constance Noonan Hadley reminds, after the pandemic and generational shifts forced companies to rethink the “why” of work, now “companies must renegotiate… the how of work” in an AI-driven workplace. That renegotiation works best when it’s collaborative – when employers and employees together redefine workflows, responsibilities, and expectations in light of AI’s capabilities.
Mini Case – A “Co-Pilot” Code of Conduct: One multinational bank provided an instructive example. When rolling out an AI assistant for its research analysts, management issued a simple “Co-Pilot Code of Conduct” to all staff. It included points like: 1) The AI is here to save you time on routine tasks (scheduling, first drafts, information lookup), not to make judgment calls – always apply your expertise to the AI’s outputs. 2) You are responsible for any content you deliver, whether AI helped produce it or not. 3) Do not input client-identifying or regulatory data into the AI. 4) Share successful use cases and prompt techniques with the team so everyone can benefit. 5) If the AI isn’t helping or introduces errors, adjust or feel free to not use it for that task. By setting these expectations upfront, the bank found much smoother adoption. Analysts weren’t confused about what they could or couldn’t do, and they felt safe experimenting within those guardrails. Over time, the best practices from employees’ experiments (like an approved prompt library for common report types) were codified into official process. This case underlines how thoughtful policies and knowledge-sharing can turn AI from a source of friction into a true “co-pilot” that the whole team trusts.
In summary, AI is undeniably changing knowledge work – how work gets done, who does it, and even how workers feel about their jobs. Many knowledge workers are finding relief in offloading drudgery to AI and are achieving new levels of efficiency and creativity. At the same time, the rush of AI into daily workflows has brought new strains: “prompt fatigue,” information overload from too many tools, and crises of confidence about one’s role. The value of certain skills is shifting – prompting and AI literacy are rising in importance – but human judgment, creativity, and interpersonal skills remain essential and arguably become more defining of one’s value. Organizations that navigate this transition best are those that pair AI adoption with clarity, training, and empathy: providing the structures for people to co-work with AI rather than struggle in confusion or competition with it. As we move beyond the hype, the most successful knowledge workers and teams will be those treating AI as neither a magic solution nor a threat, but as a powerful new partner – one whose contributions and limitations must be well understood. The era of the AI co-pilot is here, and coping with it means learning when to hit the autopilot button and when to firmly take the controls back.
Sources:
Rajendran, A. (2025). Prompt Fatigue Is the New Burnout — and No One’s Talking About It. Medium. – Describes the emerging phenomenon of “prompt fatigue” and burnout from over-reliance on AI prompting.
Microsoft Work Trend Index (May 2024). AI at Work: 2024 Annual Report. – Global survey of 31,000 people on AI adoption and attitudes; includes statistics on AI usage (75% of knowledge workers), hiring preferences (e.g. 66% of leaders won’t hire without AI skills), and workers’ feeling of overwhelm driving them to adopt AI.
Upwork Survey via BBC News (Oct 2024). Will AI make work burnout worse? – Reports results of a 2,500 knowledge worker survey (US, UK, Canada, Australia) where 96% of execs expected productivity gains from AI, but 77% of employees said AI had decreased their productivity and added to workload. Also cites ResumeNow poll on burnout expectations.
Costa, M. (2024). AI at work: Will it contribute to employee burnout? BBC News. – Article with anecdotes (e.g. PR agency case) and expert commentary on AI-related stress. Includes Everest PR team story (AI mandate causing tasks to take twice as long, then reversal) and quotes from burnout coach Leah Steele on loss of meaning and fear of replacement in an AI-driven environment.
Wiley Workplace Intelligence (Sept 2024). The Human Side of AI: 3 Tips for Navigating the AI Era. – Press release of survey of 2,000+ North American workers. Found 75% lack confidence in using AI and 61% want employer-provided training; also only ~34% of managers feel prepared to support AI integration.
Veritas (via BenefitsPro, Feb 2024). Employees are desperate for Gen AI guidelines. – Global survey (~12,000 office workers) highlighting that 77% want AI usage policies/training from employers, 70%+ are already using gen AI, and 31% admit to inputting sensitive data into AI without realizing the risks.
Slack Workforce Lab (Oct 2024). Workforce Index – Despite AI enthusiasm, workers aren’t yet unlocking its benefits. – Global survey of desk workers. Notable findings: two-thirds haven’t tried AI at work; among users, 81% see productivity improvement and 22% higher satisfaction; 93% don’t fully trust AI outputs; ~40% say their company has no AI guidelines.
Atlassian (HBR Sponsor Content, Apr 2025). AI Can Help Knowledge Workers Fix Five Frustrations. – Reports from a study of 5,000 workers (AI Collaboration Report) that “mature” AI users save 2× more time (105 min/day vs 53) and report higher energy, innovation, on-time delivery (85% vs 51%) and lower burnout. Advocates focusing AI on tedious tasks to boost satisfaction.
Tsai, H. et al. (2024). AI’s dual impact on employees’ work and life well-being (Job Demands–Resources Model study). – Academic study of 600 workers: found generative AI and AI proficiency increase productivity and job satisfaction, but AI-related technostress increases exhaustion, work-family conflict, and lowers satisfaction. Highlights need to mitigate technostress even as we adopt AI.
Hadley, C.N. (2024). Quote in Microsoft Work Trend Index. – “Now companies must renegotiate the ‘operational contract’ – the how of work – with their employees as AI puts more power into the way the job gets done.”. Emphasizes the broad, strategic nature of changes required in workplace practices in the age of AI.