
Nobody Owns the Whole Workflow. That's the Actual Problem

Most institutional workflows span systems and departments that were never designed to work as one. Using faculty onboarding as a case study, we trace what a cross-system workflow actually looks like in practice, name the structural pattern behind its delays, and offer a six-dimension diagnostic for mapping your own.


Key Takeaway

Institutional workflows span multiple systems, but no person, team, or system owns them end to end. People bridge the gaps, acting as an invisible integration layer. That layer is fragile, doesn't scale, and can't be measured until you map it.


Onboarding a new faculty member at a mid-size university touches six systems, involves at least four departments, and takes an average of 14 working days to complete.

No single person, team, or system owns this workflow from end to end. And that's not a bug in the process — it's a structural feature of how institutions are organized.

The workflow nobody designed

Here's what faculty onboarding actually looks like at most institutions. Not the process document version — the version that runs in practice, across email chains, ticket queues, and hallway conversations.

Step 1: HR creates the employee record and assigns a department. This happens in the HR/ERP system (Workday, PeopleSoft, Banner). Time: Day 1.

Step 2: IT provisions credentials — email account, network access, VPN, building access card. This requires a ticket in the IT service management system. Time: Day 1-3.

Step 3: The IT ticket sits in a queue. Depending on ticket volume and staffing, this might take a day. It might take four. Nobody outside IT knows the status unless someone sends an email to ask.

Step 4: The Registrar or department assigns course sections in the SIS. This can't happen until the employee record exists in HR (Step 1), but it doesn't require IT provisioning to be complete. In practice, it often waits because the department chair wants to confirm the teaching assignment — which happens via email, outside any system.

Step 5: Once course sections are assigned, someone creates instructor accounts in the LMS and enrolls the faculty member in their courses. At some institutions this is automatic; at many, it's a manual process requiring a separate request.

Step 6: Someone — usually a department coordinator or the faculty member themselves — emails the department chair to confirm that course sections, LMS access, and schedules all match. This confirmation loop happens entirely in email, documented nowhere.

Step 7: Previous course shells get copied, content gets updated, materials get uploaded to the LMS. This is often the faculty member's responsibility, but they can't start until Steps 2 and 5 are both complete.

Step 8: Orientation materials, welcome information, benefits enrollment links, and training requirements get sent by email. Sometimes by HR, sometimes by the department, sometimes by both with slightly different information.

Step 9: At some point — usually Day 14 or later — someone checks whether everything actually happened. This is rarely formalized. It's someone's institutional memory that "we should probably check on the new hire."

Six systems. Nine steps. Three points where the workflow stalls waiting for a manual handoff. And not one system aware of the full sequence.
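The nine steps above can be sketched as a dependency graph, where the critical path (the longest chain of sequential prerequisites) sets the minimum possible elapsed time. A minimal Python sketch; the step names and day estimates are illustrative, not measured:

```python
# Onboarding steps as a DAG: step -> (estimated elapsed days, prerequisites).
# Durations are illustrative; real values come from your own mapping.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

STEPS = {
    "hr_record":       (1, set()),                      # Step 1
    "it_provisioning": (3, {"hr_record"}),              # Steps 2-3, incl. queue time
    "course_sections": (4, {"hr_record"}),              # Step 4, incl. chair email
    "lms_accounts":    (2, {"course_sections"}),        # Step 5
    "chair_confirm":   (2, {"lms_accounts"}),           # Step 6 email loop
    "course_content":  (3, {"it_provisioning", "lms_accounts"}),  # Step 7
    "orientation":     (1, {"hr_record"}),              # Step 8
    "final_check":     (1, {"chair_confirm", "course_content", "orientation"}),
}

def critical_path_days(steps):
    """Earliest possible finish: the longest chain of sequential dependencies."""
    finish = {}  # step -> earliest finish day
    for step in TopologicalSorter({s: d for s, (_, d) in steps.items()}).static_order():
        duration, deps = steps[step]
        finish[step] = duration + max((finish[d] for d in deps), default=0)
    return max(finish.values())

print(critical_path_days(STEPS))  # -> 11: even with zero slippage, about two weeks
```

Even with these generous illustrative numbers, the sequential handoffs alone push the minimum past two working weeks before any real-world queue delay is added.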

Why this matters more than it looks

Faculty onboarding is not the most critical institutional workflow. It's not high-risk or compliance-sensitive in the way that financial aid processing or accreditation reporting is. We chose it specifically because it's mundane — and because the structural pattern it reveals shows up in virtually every cross-departmental process.

The pattern: institutional workflows are decomposed by system ownership, not by workflow logic.

HR owns the employee record. IT owns provisioning. The Registrar owns course assignment. The LMS team owns the learning environment. Each department optimizes for its part of the process. Nobody optimizes for the whole.

This is a well-documented organizational pattern — Lawrence and Lorsch (1967) identified it as the tension between differentiation and integration in organizations. Departments differentiate to develop specialized competence. But that specialization creates integration problems at the boundaries.

In most institutional contexts, the integration problem is solved by people. A department coordinator who knows that when a new hire starts, they need to email IT, follow up with the Registrar, check the LMS, and send the welcome packet. This coordinator is the integration layer — they carry the workflow logic in their head.

The three costs of people-as-integration

Cost 1: The knowledge is fragile. When the coordinator changes roles, retires, or leaves, the workflow logic goes with them. The next person in the role learns it through trial and error, reconstructing the same institutional knowledge from scratch. This is why institutions experience what feels like "organizational amnesia" every few years — they're not losing data, they're losing workflow logic that was never encoded anywhere.

Cost 2: The workflow doesn't scale. One coordinator can manage the onboarding of two or three faculty per semester. When the institution hires 15 in a semester — adjunct surges, new programs, growth periods — the same coordinator becomes the bottleneck. They're doing the same cross-system coordination, but multiplied. The handoff delays compound.

Cost 3: Nobody can see it. Because the workflow spans systems but isn't represented in any single system, there's no way to measure it. How long does faculty onboarding actually take? Where do the delays happen? Which handoffs fail most often? The answers exist in email timestamps and ticket queues, but nobody is aggregating them. You can't improve what you can't see.

The mental model: process architecture vs. system architecture

Most institutions have invested heavily in system architecture — selecting, implementing, and maintaining the right platforms for each function. HR has a system. IT has a system. The Registrar has a system. These are well-designed, well-maintained, and well-staffed.

What institutions haven't invested in is process architecture — the design of how work flows across systems. The "in-between" space. The connective tissue.

This is analogous to a distinction in software engineering between microservices and orchestration. You can have perfectly designed microservices (each system does its job well), but without an orchestration layer that coordinates them, the overall system behavior is unpredictable and fragile.

Most institutional tech stacks have mature microservices and no orchestration layer. People fill the gap. And the gap isn't visible until someone maps the actual workflow — not the process document, but the real sequence of actions, handoffs, and delays.
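To make "orchestration layer" concrete, here is a deliberately tiny sketch of its core behavior: hold the dependency map that today lives in a coordinator's head, and surface the next steps the moment their prerequisites complete. The step names are from the onboarding example; the triggering is reduced to returning a list, where a real layer would open tickets, call APIs, or notify people.

```python
# Minimal orchestration sketch: a dependency map plus "what can start now?" logic.
DEPENDENCIES = {
    "it_provisioning": {"hr_record"},
    "course_sections": {"hr_record"},
    "lms_accounts":    {"course_sections"},
}

class Orchestrator:
    def __init__(self, deps):
        self.deps = deps
        self.done = set()    # completed steps
        self.fired = set()   # steps already triggered, to avoid duplicates

    def mark_done(self, step):
        """Record a completion; return steps whose prerequisites are now met."""
        self.done.add(step)
        ready = [s for s, prereqs in self.deps.items()
                 if s not in self.done | self.fired and prereqs <= self.done]
        self.fired.update(ready)
        return ready  # a real layer would open a ticket or call an API here

orch = Orchestrator(DEPENDENCIES)
print(orch.mark_done("hr_record"))        # -> ['it_provisioning', 'course_sections']
print(orch.mark_done("course_sections"))  # -> ['lms_accounts']
```

The point of the sketch is what the coordinator no longer has to do: remember that two downstream steps can start the moment HR creates the record, and chase them by email.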

A diagnostic tool: mapping your cross-system workflows

If you want to understand where your institution has this pattern, here's a practical exercise. Pick any workflow that spans more than two systems and map it using these six dimensions:

1. Systems touched. List every system the workflow interacts with — not just the primary ones, but email, shared drives, spreadsheets, and chat tools. These "shadow systems" often carry critical workflow logic.

2. Handoff points. Where does responsibility transfer from one person, team, or system to another? Each handoff is a potential failure point and a potential delay.

3. Dependency chain. Which steps require prior steps to be complete? Map the critical path — the longest chain of sequential dependencies. This determines the minimum possible time for the workflow.

4. Manual bridges. Where do people carry information between systems that don't talk to each other? This is the integration labor — the work that wouldn't exist if the systems shared context.

5. Verification points. Where does someone check that a prior step was completed correctly? How do they check — by looking in the system, or by asking someone?

6. Time to completion. How long does the workflow actually take, from trigger to done? Not the policy target — the real elapsed time, including queue delays and handoff gaps.
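One lightweight way to record the exercise is a single record per workflow, one field per dimension. A sketch; the values filled in for onboarding are illustrative, not measured:

```python
# One record per mapped workflow, one field per diagnostic dimension.
from dataclasses import dataclass

@dataclass
class WorkflowMap:
    name: str
    systems: list              # 1. every system touched, incl. shadow systems
    handoffs: int              # 2. responsibility transfers
    critical_path_days: float  # 3. longest sequential dependency chain
    manual_bridges: int        # 4. people carrying data between systems
    verifications: int         # 5. checks on prior steps (and how they're done)
    elapsed_days: float        # 6. real trigger-to-done time, incl. queues

onboarding = WorkflowMap(
    name="faculty_onboarding",
    systems=["HR/ERP", "ITSM", "SIS", "LMS", "email", "spreadsheets"],
    handoffs=8,
    critical_path_days=11,
    manual_bridges=3,
    verifications=2,
    elapsed_days=14,
)
print(onboarding.elapsed_days - onboarding.critical_path_days)  # -> 3 days of slack beyond the minimum
```

Filling in a dozen of these records is usually enough to see which workflows share the pattern and where the gap between minimum and actual time is widest.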

When you map these dimensions, a pattern usually emerges. The raw execution time — the time spent actually doing the work in each system — is a small fraction of the elapsed time. The majority of the time is spent waiting. Waiting for handoffs, waiting for queues, waiting for email responses, waiting for confirmations.

In our faculty onboarding example, the actual work — creating records, provisioning accounts, assigning courses, building course shells — might total three to four hours of human effort. The 14-day elapsed time is almost entirely wait time and coordination overhead.
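A back-of-the-envelope check of that ratio, taking 3.5 hours of effort from the estimate above and assuming 8-hour working days:

```python
# Share of elapsed time spent on actual work vs. waiting.
work_hours = 3.5         # from the estimate above (three to four hours of effort)
elapsed_hours = 14 * 8   # 14 working days, assuming 8-hour days

work_share = work_hours / elapsed_hours
print(f"{work_share:.1%} work, {1 - work_share:.1%} waiting")
# -> 3.1% work, 96.9% waiting
```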


What changes when you see workflows this way

This framework isn't about technology selection. It's about making visible a structural pattern that most institutions live with but rarely examine.

Once you see that your people are functioning as the integration layer between your systems, several questions become clearer:

  • Which workflows should be mapped and optimized first? Start with the ones that run most frequently and have the most handoff points.
  • Where is institutional knowledge most at risk? The workflows where one or two people carry the cross-system logic in their heads.
  • What would an orchestration layer need to know? It would need read access to multiple systems, awareness of dependencies, and the ability to trigger actions or notifications across system boundaries.

Whether that orchestration layer is a technology platform, a redesigned process, or a combination depends on your institutional context. The first step is the same either way: map what actually happens, not what's supposed to happen.

What this analysis doesn't address

Workflow mapping reveals the problem but doesn't solve the harder question: who should own cross-system processes? If a workflow spans HR, IT, the Registrar, and the LMS team, which department has the authority and budget to optimize it?

In most institutions, the answer is: nobody. Cross-system workflows fall between organizational boundaries the same way they fall between system boundaries. This is a governance challenge, not a technology challenge — and it's one we've seen institutions struggle with regardless of what tools they use.

If your institution has found a good governance model for cross-system workflows, we'd genuinely like to hear about it.


This is from Quad's Frameworks series—practical tools for institutional technology decisions. More at quadhq.ai.

