How AI Has Changed My Security Engineering Workflow
Cybersec Café #91 - 05/12/26
Cybersec Café community - it’s been a while.
Six months, to be exact.
A lot has happened in that time. New job. A front-row seat to the chaos and acceleration of AI. And experimenting with new forms of content (you might’ve seen me on IG or TikTok).
But honestly… it feels good to be back. Sitting down, reflecting, writing - getting my thoughts out again.
Today, I want to talk about how Security Engineering has changed - fast - because of AI.
In just a few months, we’ve seen massive leaps in capabilities: skills, knowledge bases acting as brains, advanced reasoning, and ever-expanding context windows.
And these shifts are actively reshaping how the job is done.
I want to contrast what the role looked like just 6-8 months ago vs. what it looks like now. And why, even with all the hype around tools like Anthropic’s Mythos and the sea of other AI tools - cybersecurity is still one of the best careers you can be building right now.
- Today’s Sponsor -
Stop studying for your fifth certification.
Employers don't want proof you can pass a test. They want proof you can do the job - write a detection, triage an alert, lead an incident, conduct a threat hunt.
That's what Defend the Org is built for. Hands-on labs based on the skills you’ll actually use on the job - built from the ground up by blue teamers.
Whether you’re trying to pivot into cybersecurity, land your first role, or upskill in your current one - start getting reps in at Defend the Org.
Normally $29/month or $300/year - now $20/month or $200/year for a limited time.
Security Incidents
SIEM Alerts are still a thing of the present and future. But the way we triage, escalate, and remediate them is shifting.
Automation and AI reasoning have made throughput less of a concern, putting the emphasis on deciding where human intervention is required.
Before
A SIEM alert fires - and everything starts with the analyst.
You’d run queries to build context: normal activity, login locations, user role, recent behavior.
From there, it was a mix of experience and whatever playbooks existed (if they existed at all).
The analyst would:
write queries
stitch together a timeline
interpret the data
and ultimately decide: true positive or false positive
If it escalated, an incident would be declared and handed off to Incident Response.
From there, the process was still heavily manual: tracking timelines, coordinating actions, writing updates, and eventually documenting the full incident lifecycle.
End-to-end, it was a deeply hands-on process.
Running an incident wasn’t just technical - it was a full-time coordination effort.
Now
What used to take hours of manual effort now starts with AI.
Alerts are triaged by agents that integrate directly into your tooling through MCP servers - pulling logs, correlating signals, and building context automatically.
With reasoning models like Claude Opus, a basic playbook is no longer static - it’s something that can be iterated on in real time.
In minutes, you can get:
a structured timeline
correlated evidence across systems
and an initial determination
Faster than any human analyst could realistically produce.
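As a rough sketch, that structured output might look something like the following - every field name here is an assumption for illustration, not any real tool’s schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TimelineEvent:
    timestamp: datetime
    source: str        # e.g. "okta", "edr" - wherever the agent pulled the signal
    description: str

@dataclass
class TriageResult:
    alert_id: str
    timeline: list     # correlated events, ordered by time
    evidence: dict     # raw signals keyed by source system
    determination: str # "true_positive" | "false_positive" | "needs_human"
    confidence: float  # the agent's self-reported confidence, 0-1

    def needs_escalation(self) -> bool:
        # Anything the agent isn't confident is benign goes to a human analyst
        return self.determination != "false_positive" or self.confidence < 0.8
```

The point isn’t the exact schema - it’s that the analyst now starts from a structured, reviewable artifact instead of a blank query window.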
The analyst’s role shifts from doing the work to validating and directing it.
If deeper analysis is needed, you don’t jump back into manual querying - you instruct the agent to go further.
If it’s a true positive, escalation still happens - but now you’re working alongside specialized agents across the incident response lifecycle:
containment suggestions
automated remediation steps
real-time documentation
even post-incident write-ups
Are hallucinations and false positives still a concern?
Yes - but with a human in the loop, the upside is too hard to ignore.
We’re even seeing scheduled workflows that:
build timelines automatically
generate incident reports
and propose improvement actions before the incident is even closed
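A minimal sketch of how such a scheduled workflow could be wired together - the three “agent” functions are stand-ins for LLM calls, not a real platform’s API:

```python
# Sketch of a scheduled post-incident workflow. Each "agent" function is a
# stand-in: in practice it would prompt a model with the incident's logs
# and runbook context rather than run this toy logic.

def build_timeline(incident: dict) -> list:
    # Stand-in: an agent would correlate raw events into an ordered timeline
    return sorted(incident["events"], key=lambda e: e["ts"])

def draft_report(incident: dict, timeline: list) -> str:
    # Stand-in: an agent would summarize the timeline into a report draft
    return f"Incident {incident['id']}: {len(timeline)} events correlated."

def propose_improvements(incident: dict) -> list:
    # Stand-in: an agent would suggest detection or runbook improvements
    return [f"Review detection coverage for {t}" for t in incident["tactics"]]

def post_incident_workflow(incident: dict) -> dict:
    timeline = build_timeline(incident)
    return {
        "timeline": timeline,
        "report": draft_report(incident, timeline),
        "improvements": propose_improvements(incident),
    }
```

Run on a schedule (or triggered when an incident moves to “resolved”), this is what lets the report and improvement actions exist before the incident is even closed.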
Security incidents haven’t become easier - the way we approach them has changed.
Less hands-on keyboard work. More high-level thinking, validation, and orchestration.
Detection Engineering
Just over a year ago, I wrote my most popular article to date: My SIEM-Agnostic Creative Process to Detection Engineering.
And while the core ideas still hold (how to think about coverage, how to create value), the way I approach the technical process behind Detection Engineering couldn’t be more different.
Before
Detection ideas came from everywhere:
OSINT
incident learnings
threat models
or the need to cover a new log source
From there, the process was… manual.
You’d dig through documentation, search open-source detection repositories, and piece together ideas that might work in your environment.
There was a lot of strategy involved - making sure coverage was meaningful, not just surface-level.
My personal workflow always included a mini threat hunt: explore the log source, understand what “normal” looked like, and identify high-leverage behaviors that might’ve been missed.
Then came the build phase:
translate ideas into detection logic
write unit tests
deploy and monitor over weeks
Tuning was inevitable - balancing signal vs. noise, adjusting thresholds, refining logic.
It was a craft. And it took time.
Now
The barrier to building detections has collapsed.
With modern AI, detection engineering has shifted from manual construction to guided generation.
In a single afternoon, I can generate multiple detections, create unit tests, and produce full documentation of the attack surface being covered - as long as I’ve clearly drafted my use case.
I don’t even need a deep, upfront understanding of the log source or its schema anymore.
With access to my datalake via MCP and the ability to parse documentation instantly, AI can explore the data, identify patterns, and propose detection strategies for me - significantly reducing the front-loaded phases that used to come with detection engineering.
Whether I’m building a detection suite from scratch, identifying coverage gaps, or looking for exception opportunities - the model can generate a structured plan in minutes.
From there, it’s a short step to production:
convert to detection-as-code
attach metadata and tests
pass CI
deploy
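To make that concrete, here’s a toy detection-as-code artifact - logic, metadata, and its unit test living together in version control. The schema and the detection itself are illustrative, not any specific platform’s format:

```python
# Illustrative detection-as-code artifact: the logic, its metadata, and a
# unit test travel together through CI before anything deploys.

DETECTION = {
    "id": "impossible-travel-v1",
    "severity": "medium",
    "mitre": ["T1078"],                       # metadata rides with the logic
    "runbook": "runbooks/impossible-travel.md",
}

def detect(events: list) -> list:
    """Flag logins for the same user from two countries within an hour."""
    hits, last_seen = [], {}
    for e in sorted(events, key=lambda e: e["ts"]):
        prev = last_seen.get(e["user"])
        if prev and prev["country"] != e["country"] and e["ts"] - prev["ts"] < 3600:
            hits.append({"user": e["user"], "from": prev["country"], "to": e["country"]})
        last_seen[e["user"]] = e
    return hits

def test_detect():
    events = [
        {"user": "alice", "country": "US", "ts": 0},
        {"user": "alice", "country": "BR", "ts": 600},  # 10 min apart: flag
        {"user": "bob", "country": "US", "ts": 0},
        {"user": "bob", "country": "US", "ts": 600},    # same country: ignore
    ]
    assert detect(events) == [{"user": "alice", "from": "US", "to": "BR"}]
```

Whether the logic is Python, SQL, or a SIEM’s query language, the shape is the same: the model drafts all three pieces, and CI plus a human review gate the merge.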
Are these detections always deeply complex out of the gate? Not necessarily.
But with the right inputs and iteration, these models can produce detection logic that rivals, or even exceeds, what most engineers could write manually.
Faster too.
The tradeoff is the same as everywhere else: you still need a human in the loop. But the time to ship has become insane to think about.
Detection engineering is no longer bottlenecked by creation. It’s bottlenecked by judgment, testing, and monitoring.
The time-to-value is faster than anything this field has seen before.
Runbooks
Runbooks used to be a necessary evil.
Evil because no one liked writing them.
Necessary because they could dramatically reduce time to remediation.
AI has completely flipped that dynamic.
Before
Runbooks were written by humans, for humans.
They followed rigid, workflow-style logic:
if this → then that
branching decision trees
copy/paste queries into the SIEM
step-by-step paths from alert to resolution
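In code terms, those old runbooks behaved like hardcoded branching - a simplified illustration of the style, not any real team’s runbook:

```python
# Simplified illustration of the old, deterministic runbook style:
# every branch is spelled out in advance, and anything unanticipated
# falls through to a human.

def legacy_runbook(alert: dict) -> str:
    if alert["type"] == "failed_login":
        if alert["count"] > 50:
            return "escalate: possible brute force"
        return "close: below threshold"
    if alert["type"] == "new_admin":
        return "escalate: verify change ticket"
    # Every unhandled case ends here - the 6am mystery-alert problem
    return "no runbook: page a senior engineer"
```

Every new alert type meant another hand-written branch, which is exactly why coverage never kept up.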
They were structured, but brittle. And keeping them up to date was a constant struggle.
Every triaged alert should have fed back into improving the runbook to ensure it was current. But in reality, that work often fell behind. Manual upkeep rarely wins against more urgent tasks.
And coverage was always incomplete. Runbooks typically existed for the most common scenarios or the most critical alerts.
Everything else? You’re on your own.
That 6am medium-severity alert that suddenly escalates… no runbook, no guide - just the analyst trying to figure it out, and often resorting to pulling in a more senior engineer for assistance.
Necessary? Absolutely. Loved? Depends who you ask - the person using it, or the person writing it?
Now
Runbooks are no longer written for humans. They’re written for machines, by machines - typically as structured, deeply detailed markdown.
Designed end-to-end by agents.
The format hasn’t fundamentally changed.
However, instead of manually crafting every branch and edge case, you can:
feed in detections
reference your knowledge base
and let an LLM generate a near-complete runbook
In just a few minutes.
These runbooks essentially function as executable logic for agents:
how to triage
how to enrich
how to escalate
how to remediate
And because they’re machine-consumable, they scale in a way human-written runbooks never could.
The human role doesn’t disappear - it shifts.
You’re no longer writing from scratch. You’re reviewing, refining instructions, and validating output.
LLMs still struggle with higher-level judgment, but they’re more than capable of getting runbooks most, if not all, of the way there.
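As a rough sketch of the idea, a machine-consumable runbook can be as simple as structured markdown whose headings an agent parses into ordered phases - the format here is an assumption for illustration, not a standard:

```python
# Rough sketch: a markdown runbook whose "## " headings an agent can parse
# into an ordered list of phases to execute. The format is illustrative.

RUNBOOK_MD = """# Runbook: Impossible Travel
## Triage
Query the identity provider for the user's login history.
## Enrich
Pull device posture and recent MFA events.
## Escalate
Open an incident if both logins used valid credentials.
## Remediate
Revoke sessions and force a password reset.
"""

def parse_runbook(md: str) -> list:
    """Return (phase, instructions) pairs in document order."""
    steps, phase, lines = [], None, []
    for line in md.splitlines():
        if line.startswith("## "):
            if phase:
                steps.append((phase, " ".join(lines).strip()))
            phase, lines = line[3:].strip(), []
        elif phase and line.strip():
            lines.append(line.strip())
    if phase:
        steps.append((phase, " ".join(lines).strip()))
    return steps
```

Because the structure is predictable, the same document serves as both human reference and agent instructions.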
And when paired with detection-as-code, the impact compounds. When you have your detections and runbooks all living as code under the same roof, your agents now have access to your entire suite as a knowledge base to reference against.
In a modern security team, “everything as code” isn’t aspirational - it’s become the standard.
Automations and Workflows
Security teams have relied on deterministic workflows for ages.
But AI has made a lot of that infrastructure feel… outdated.
Who needs to create custom functions to call APIs anymore when MCPs exist?
Before
What we now call “agentic workflows” used to exist as automations — primarily through SOAR platforms.
And to be fair, SOAR was powerful.
It gave teams the ability to automate triage steps, enrich alerts, trigger response actions - but everything was deterministic.
If you didn’t explicitly define a step, it didn’t happen.
Which meant every detection needed its own workflow, every query had to be written ahead of time, and every edge case had to be anticipated.
Even when detections were similar, workflows had to be manually reviewed and adapted. Scaling this across a growing detection suite was a massive lift.
Unless you built it perfectly from day one, maintaining it became its own burden.
With that said, when it worked, it really worked - and it felt magical at the time.
At a previous company, after fully implementing a SOAR platform aligned with our detection suite, we saw:
~80% reduction in average time to resolution
hours of analyst time saved per week (can’t remember the exact amount)
For its time, SOAR was ahead of the curve - and we must tip our hats to those that came before AI.
Now
The shift from automations to skills is a step up, not just an iteration.
Skills are the building blocks of modern, agentic workflows. And in security engineering, they’re delivering the same kind of impact SOAR did at first, amplified at least tenfold.
Any repeatable part of your workflow should be a skill. If it’s not, you’re leaving leverage on the table.
The difference now is flexibility. Instead of hardcoding workflows, you:
define capabilities (skills)
give agents access to tools (via MCP)
and let them decide how to execute
These systems are no longer waiting for exact instructions; they’re reasoning through the problem space.
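A minimal sketch of the skills pattern - capabilities registered once, with the agent deciding which to invoke. The “agent” here is stubbed as a simple field check; in reality an LLM would reason over the alert:

```python
# Sketch of the skills idea: capabilities are registered once, and an
# agent (stubbed here as a field check) decides which to invoke.

SKILLS = {}

def skill(name):
    """Decorator that registers a function as an invokable skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("enrich_ip")
def enrich_ip(alert):
    return {"ip": alert["ip"], "reputation": "unknown"}  # stand-in for a real lookup

@skill("lookup_user")
def lookup_user(alert):
    return {"user": alert["user"], "role": "engineer"}   # stand-in for an IdP query

def agent_route(alert: dict) -> list:
    # Stub for the agent's reasoning: choose skills based on what's in the alert.
    # A real agent would decide this dynamically, per alert, via the model.
    chosen = [name for name, cond in
              [("enrich_ip", "ip" in alert), ("lookup_user", "user" in alert)] if cond]
    return [SKILLS[name](alert) for name in chosen]
```

The contrast with SOAR is the routing: you define what the system *can* do, not the exact order it must do it in.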
In practice, this looks like:
Every alert triggering an automated triage workflow
→ reading the runbook
→ gathering context
→ closing benign positives automatically
→ or escalating when needed
Incidents invoking specialized skills
→ handling repeatable response actions
→ coordinating investigation steps
Even PR reviews becoming a target for automation
→ reducing friction in how quickly teams can ship
And the barrier to entry is lower than ever.
You can describe a skill in plain language, have a model generate it, schedule or event-trigger it, and deploy it in minutes.
What used to take weeks of engineering effort now takes an afternoon.
Less hands-on keyboard. Less rigid workflow design.
More orchestration. More system-level thinking.
The work hasn’t disappeared into thin air; it’s moved up a layer, and it requires us engineers to adjust with it.
Multi-tasking
AI has changed how the world thinks about productivity. It’s enabled a form of multi-tasking that just wasn’t possible before.
Before
Attention and bandwidth were everything.
A single critical alert could consume an analyst’s entire day. Two at once? You’re probably underwater.
A Sev1 incident could derail an entire week for an incident responder.
A detection might take days to move from idea, through development, just to hit the staging environment - not even production yet.
Even experienced engineers, the ones who could juggle both engineering and operations, were still constrained.
Most were effectively siloed to one incident and one project or initiative at a time.
Not because we couldn’t think across multiple problems, but because execution was limited by what you could physically do, by hand, in a sprint.
And context switching? Expensive.
Switching between tasks too frequently usually slowed you down more than it helped. So the solution was structure: block time, focus deeply, close out tickets sequentially.
You could balance types of work (ops + engineering), but rarely multiple engineering efforts in parallel.
Now
Now it feels like everything runs in parallel. Because, in a way, it does.
The biggest shift for me was to think of every task as if I have an intern assigned to it.
I need to give that intern clear instructions, but then I can set it free to run in the background, and it will come back to me with a structured output or plan a few minutes later.
All while I can move on to work on something else.
A typical workflow might look like:
Kick off an alert investigation skill
Spin up a detection development workflow
Actively working on a separate engineering project
All at the same time. It’s honestly insane to just think about.
The mental model has changed to “If I’m actively working on something, I should also have agents working passively in the background.”
If the tokens are there, you should be using them.
But this shift comes with a cost.
The volume of output has increased dramatically. What used to fill a two-week sprint can now happen in days.
And that compression creates a new kind of pressure - more decisions. More context to track. More high-level thinking, more often.
The low-level work has been abstracted away. But the cognitive load hasn’t decreased.
Engineers aren’t necessarily doing more work… but they are operating at a consistently higher level of intensity.
And that’s something I don’t think we’ve fully adapted to yet.
The Cybersec Café Discord is officially live! Join a growing community of cybersecurity professionals who are serious about leveling up. Connect, collaborate, and grow your skills with others on the same journey. From live events to real-world security discussions — this is where the next generation of defenders connects. Join for free below.
So where is security engineering headed?
Nowhere.
At least not in the way people try to fear-monger you into thinking.
The release of Mythos is an example - an advanced reasoning model to find vulnerabilities at a speed no one can fathom. An amazing feat of engineering nonetheless.
But cybersecurity has never been just about code vulnerabilities. That’s only a small slice of the problem.
The real value is in domain knowledge:
Understanding how attackers think
Knowing how systems fail
Being able to analyze and interpret data (and use LLMs to assist you)
Those skills are becoming more valuable - not less. And if you’re considering a career shift to cybersecurity, I think it’s worth understanding where the leverage is shifting.
Right now, I’d place my bets on areas like:
Detection & Response: An end-to-end understanding of your environment - how to detect, investigate, and mitigate threats. AI can assist here, but it still struggles to fully grasp the nuance and context required to make the right calls.
Infrastructure Security: We’re operating in a cloud-first, AI-driven world. Infrastructure is evolving fast, and securing it requires engineers who understand both the systems and the strategy behind them.
Application Security: AI is getting very good at finding vulnerabilities. But security doesn’t stop there. It’s about embedding security into the development lifecycle, driving remediation, and building systems that are secure by design.
The role of the security engineer isn’t being replaced, just elevated.
Less hands-on execution. More judgment. More strategy.
The opportunity is expanding for those willing to adapt.
–
On a final note, while I don’t plan on writing as frequently as before - it feels great to be pressing publish on another article at the Cybersec Café.
I’m looking forward to showing up in your inbox more consistently throughout 2026 than I did in Q1.
Thanks for continuing to follow along at the Cybersec Café!
Securely Yours,
Ryan G. Cox
P.S. The Cybersec Cafe delivers Deep Dives on a cybersecurity topic designed to sharpen your perspective, strengthen your technical edge, and support your growth as a professional - straight to your inbox.
. . .
For more insights and updates between issues, you can always find me on Twitter/X, Instagram, TikTok, YouTube, or my Website. Let’s keep learning, sharing, and leveling up together.