How to Train Devs to Build Safer Code

AI coding tools can speed up development, but they can also increase security oversights. Here's how to shift security left and build guardrails that catch issues early.

The New Reality of Software Development

There's no longer any debate about whether AI is here to stay. The question worth focusing on is how AI will be used in our day-to-day lives as developers. We've moved past "AI will take our jobs" to "it's not there yet, but it's a good tool." While this is true, it comes with a caveat: it's a good tool if you know what you're doing and understand AI's limitations.

Yes, Copilot, Cursor, Bolt, and Claude are all great tools for speeding up your development workflow, but they can also amplify the things web developers tend to overlook, such as security best practices.

There are countless examples of AI-generated code hallucinating or referencing methods that never existed. Often, when working with these AI coding tools, we outline the happy path and make it work. But as you gain more experience as a developer, you learn that the real job is going through the non-happy paths. It's about thinking through edge cases, how to build to scale, and how to do this securely.

One of the best ways to do this has been to leverage people with specialized skill sets and break up the work so that our products include solutions beyond just the code. Well-rounded applications include aspects of security, accessibility, and stability.

But it's 2025; we vibe and code, right? Sure, if you want your product to become a security risk the internet tries to take down. That's what happened to Leo, who posted on Twitter that his new SaaS platform was built with AI tools like Cursor, and was promptly made an example of by hackers.

Most developers see securing secret keys and requiring authentication for API endpoints as basic coding hygiene, but AI doesn't, in the same way that someone new to the industry wouldn't know these security best practices either.
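As a baseline, those two habits might look like the sketch below in Python. The endpoint and variable names are illustrative, and a real application would lean on its framework's auth middleware rather than a hand-rolled check:

```python
import os
import hmac

def is_authorized(headers: dict) -> bool:
    """Reject requests that don't present the expected API key."""
    # Read the secret from the environment, never from source code.
    api_key = os.environ.get("API_KEY", "")
    supplied = headers.get("Authorization", "")
    # Constant-time comparison avoids leaking the key via timing differences.
    return bool(api_key) and hmac.compare_digest(supplied, f"Bearer {api_key}")

def get_report(headers: dict) -> tuple:
    """A hypothetical endpoint that authenticates before doing any work."""
    if not is_authorized(headers):
        return (401, "Unauthorized")
    return (200, "report data")
```

The point isn't the specific code; it's that neither habit appears in AI-generated output unless you explicitly ask for it.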

We need to treat AI the same way we treat onboarding and training interns and juniors. We need to be explicit with the requirements and create guardrails and checkpoints as far left into the process as possible. That way, the vibe can stay high as we avoid becoming the next tech meme on Reddit.

What "Shift Left on Security" Really Means

Shifting left: What is it, and why should you care?

Let's say two weeks before a project deadline, stakeholders reach out and inform your team that, to go live, a new workflow is required. This new workflow will fundamentally change everything the team spent the last few months building. Faced with the ultimatum of working overtime or blowing the deadline, you might ask yourself: how did no one discover this before?

This is why the concept of shifting left has become a trend in the product cycle. A product cycle can be broken down into the far left, the project planning that happens before any development is done, and the far right, when the team is deep into the code and things are just about done.

The core of the shift left theory centers around the idea that the more we poke holes in what we think we're building, the less likely we are to have to make a huge and expensive pivot once we move to the right of the process. If the goal is to find things we typically overlook early in the process, security should shift to the far left so we can improve the success of our products.

In my experience, shifting security left does three things:

  1. Reduces far-right issues, something we're all hyper-aware of
  2. Empowers developers to own security instead of just partnering with security
  3. Moves from building features to building secure systems

Calling out our security concerns makes the team proactive about preventing security risks. While it's still important to partner with the security department to analyze security risks properly, the more we can encourage developers to question whether what they are building is secure, the better off technology will be as a whole.

Early exposure to security requirements will allow the team to think of out-of-the-box solutions that take the team from building a feature to building secure systems.

Building a Security-First Mindset in Your Dev Workflow

Building security-first products requires more than tools or a check-in with your security team after the development work is complete. You need a workflow that keeps security at the forefront of your mind from the idea stage to maintenance mode.

Define the Problem, Create the Design, Complete Dev Discovery

Once the team has clearly defined the problem and outlined a clear solution, bring security into the loop. The security team will have enough information to highlight major risks and guide discussions around compliance, secure architecture, and known threat models.

The Security Check-In

You'll find countless resources that suggest describing this stage as a space to focus on balancing security requirements, resource constraints, and technical feasibility. But a key element often overlooked is developer curiosity.

Encouraging engineers to engage with the security team early shifts the mindset from security being a checklist item to shared responsibility. The more developers can understand the attacks that security is trying to prevent, the more prepared they'll be to build with security in mind.

Slightly shifting the mindset here helps turn developers into a red team. Team leads should treat this session as a mini red team exercise focused on training engineers to poke holes in their work before it ever gets built, prompting them to ask questions like:

  • What could go wrong if someone abused this feature?
  • What would a malicious user try to do?
  • Could this input be manipulated?
  • Is there a way to gain more access than intended?

This type of environment can improve cross-team collaboration by building empathy with security teams. Now, the whole dev team is starting to think like a hacker, moving from feature development to defensive coding.

When developers understand why an attack might happen, they're more likely to write code that prevents it.

Secure by Test: Automating Your Defenses

The red team-style thinking developed in the security meeting shouldn't stay trapped in slide decks or postmortems. It should inform how your team writes tests: the threats identified in those meetings should show up as test cases.

Every hole the team is able to poke in your system becomes a real-world scenario for what should be validated, blocked, or flagged in your automated tests. Your tests become more than a check that "the code works in prod"; they're proof that your code is safe.

Making space for developer curiosity promotes better architecture and shapes your team's security practices. When you include your product's potential vulnerabilities as part of your testing requirements, you create security guardrails that stretch from development to deployment and all the way through maintenance.

Common Unit Tests Include:

  • Input validation: Ensure only expected data types, lengths, and formats are allowed.
  • Data sanitization: Strip or encode harmful inputs that could lead to injection attacks.
  • Authentication/authorization checks: Verify access controls and permission boundaries.
  • File upload validation: Confirm file type, size, and location restrictions are enforced.
  • Dangerous URL parameters: Watch for parameters like page, file, and template.
  • Unexpected input: Check application behavior under bad or malformed input.
  • Edge cases: Re-test scenarios from past incidents or security reviews.

The power of these tests comes when you blend them with the real-world scenarios outlined in the security conversation.
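A sketch of what a couple of these tests might look like in Python. The `validate_username` and `sanitize_html` helpers are hypothetical stand-ins for your own validation layer, and the attack strings come straight from the scenarios a security conversation would surface:

```python
import re

def validate_username(name: str) -> bool:
    """Allow only 3-20 alphanumeric/underscore characters (whitelist approach)."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", name))

def sanitize_html(text: str) -> str:
    """Encode characters that could break out into markup (injection defense)."""
    return (text.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;"))

# Unit tests expressing the security scenarios the team identified:
def test_rejects_script_injection():
    assert not validate_username("<script>alert(1)</script>")

def test_rejects_oversized_input():
    assert not validate_username("a" * 10_000)

def test_sanitized_output_contains_no_markup():
    assert "<" not in sanitize_html("<img src=x onerror=alert(1)>")
```

Each test documents an attack the team anticipated, so when one fails, the reviewer knows exactly which defense regressed.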

Example: Preventing Business Logic Abuse

Let's say the team is building a feature that gives users free credits the first time they submit a form. In the conversation with security, we identified business logic abuse, where a user might be able to submit a form multiple times to get free credits.

Test ideas:

  • Write unit tests to validate backend guards on usage limits, submission windows, etc.
  • Test rate limiting.
  • Ensure forms and endpoints reject duplicate requests when appropriate.

These types of tests often fall under integration or security testing, but they overlap with unit tests in one important way: the earlier you can catch an issue, the better.

Logging to Alert the Right People

  • Create logs for threat thresholds. Catch the yellow flags before they turn into red flag P0s.
  • Craft detailed logs with the appropriate level of granularity and clarity.
  • Define workflows early: who gets notified, how, and what the response plan is.

Getting observability right for security risks can make or break your system. You can plan and test all you want, but the effort means little if you can't alert the right people when it matters.

I get it; logging and alerting are hard to get right. It's a balancing act: capture what matters without creating so much noise that critical alerts become hard to filter.

The key to getting this right is to identify the threshold for risk in the first security check-in and build those measurement-based alerts into the systems. For example, you may want to be notified if someone repeatedly triggers a rate-limited action, which might not be a hack but could signal misuse or testing of vulnerabilities.
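A sketch of that kind of threshold-based alert. The threshold, window, and notification callback are stand-ins for whatever your team agrees on in the security check-in:

```python
import time
from collections import defaultdict

# Hypothetical threshold agreed on with security:
# more than 5 rate-limit hits per user in 10 minutes is a yellow flag.
ALERT_THRESHOLD = 5
WINDOW_SECONDS = 600

events = defaultdict(list)  # user_id -> timestamps of rate-limit hits

def record_rate_limit_hit(user_id, notify, now=None):
    """Log a rate-limit trigger and alert once the threshold is crossed."""
    ts = time.time() if now is None else now
    # Keep only hits inside the sliding window.
    window = [t for t in events[user_id] if ts - t < WINDOW_SECONDS]
    window.append(ts)
    events[user_id] = window
    if len(window) > ALERT_THRESHOLD:
        # In production this might post to Slack or Sentry; here it's a callback.
        notify(f"user {user_id}: {len(window)} rate-limit hits in 10 min")
```

Below the threshold, the hits are just logs; past it, the right people get paged before a yellow flag becomes a P0.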

Once you understand that threshold, you can update and create critical alerts using platforms like Sentry to identify security risks. You can set up your system to leverage Slack or email to ensure that when that level of alert surfaces, the right chain of command is notified to keep your systems secure.

Secure Code Is Product Quality

Whatever you think of the vibe-coding philosophy, security should always factor into product quality. The more secure your system is, the higher quality your product will be in the long run. Following even a few of these steps can help your team build guardrails that support junior developers and catch AI-generated issues early.

Whether you hand-code everything or lean on code-assist agents, shifting security to the left will help you ship faster and safer.


Need help building security into your development workflow? Book a discovery call to discuss secure architecture for your team.