I am writing this article as the founder and owner of a small to medium-sized software company. At erminas, we develop software for industrial digitalisation. Our solutions are used by customers in production environments – and that means responsibility. Responsibility for functioning systems, for security and for trust.
We are not a large corporation with our own security department. And that applies to many SMEs. Nevertheless, we bear the same responsibility as the big players – towards our customers, our employees and our society.
What cybersecurity looks like in everyday life
In many medium-sized companies, security is not a separate team. Security issues arise in the midst of everyday life – between deadlines, customer requests and ongoing operations. Usually inconspicuously – until something happens.
Then comes the headline: a critical security vulnerability. Everyone asks themselves:
- Are we affected?
- Does this affect our customers?
- Do we have this under control?
And suddenly everything is urgent.
Stress arises when structures are lacking
Patching is not the problem – it’s the process of getting there.
- Which software components do we use?
- Which versions are affected?
- Where exactly?
- Which customers are affected?
- Is the fix already in place?
- Has it been delivered?
Without clear structures, every security vulnerability becomes a stress test. Instead of reacting calmly, you find yourself having to explain yourself – both internally and externally. This leads to uncertainty, avoidable stress and sometimes even a loss of trust.
The Cyber Resilience Act: not an adversary, but a help
The Cyber Resilience Act (CRA) provides a legal framework that demands transparency and responsibility. For us, this is not a threat scenario, but an opportunity:
- Clear processes instead of ad hoc reactions
- Transparent information for customers
- Well-founded decisions instead of gut feelings
The CRA does not force anyone to be perfect. But it does demand traceability – and that can be achieved with good teamwork and smart tools.
In practice, this often means combining a few well-established approaches rather than building a heavy security organization. Many teams rely on Security Champions to embed security awareness into day-to-day work and act as local points of contact. Regular security awareness training helps ensure that risks and responsibilities are understood across the organization. On a structural level, frameworks such as the NIST Cybersecurity Framework provide a common language for documenting decisions, processes, and responsibilities – which is exactly what traceability under the CRA is about.
What this means for our work – A realistic scenario from our everyday life
Let’s imagine that a critical security vulnerability is discovered in a widely used open source library that is also used in our products. In the past, this would have triggered a chain reaction: Who uses what? Where? Which version? Has it been fixed yet? Who needs to be informed?
Today, the process is more structured – thanks to clear processes and a shared understanding of responsibility.
Step 1: Overview via the SBOM
Our software contains a Software Bill of Materials (SBOM) for each release. This allows us to see immediately:
- Whether the affected library is used at all
- In which version
- In which products
- And: Which customer installations are specifically affected
This reduces the potential circle from ‘all’ to ‘these five installations’. This creates focus – and saves valuable time.
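The SBOM lookup described above can be sketched in a few lines. This is an illustrative example, not our actual tooling: the SBOM excerpt, component names and versions are invented, but the field names (`components` with `name` and `version`) follow the CycloneDX JSON format.

```python
import json

# Hypothetical CycloneDX-style SBOM excerpt for one release.
sbom_json = """
{
  "components": [
    {"name": "libfoo", "version": "1.4.2"},
    {"name": "libxml2", "version": "2.9.10"}
  ]
}
"""

def affected_components(sbom_text, vulnerable_name, vulnerable_versions):
    """List the versions of `vulnerable_name` in the SBOM that fall
    into the set of known-vulnerable versions."""
    sbom = json.loads(sbom_text)
    return [
        c["version"]
        for c in sbom.get("components", [])
        if c["name"] == vulnerable_name and c["version"] in vulnerable_versions
    ]

# A non-empty result means this product build is in the affected circle.
print(affected_components(sbom_json, "libxml2", {"2.9.10", "2.9.11"}))
```

Running the same check against the SBOM of every release and every customer installation is what shrinks the circle from ‘all’ to a concrete, short list.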
Step 2: Check patch status
The next question: Is there already a patch? If so:
- Have we already integrated it?
- Is it part of a release?
- Has this release already been delivered to the affected customers?
This can also be tracked – not always automatically, but transparently documented. And if no patch exists yet, we at least know where we need to prioritise.
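The delivery question can be answered with a simple comparison of installed versus first-fixed version. The customer names, versions and the `fixed_in` value below are made up for illustration; a naive dotted-number comparison like this ignores pre-release suffixes, which real tooling would have to handle.

```python
def parse_version(v):
    """Split a dotted version string into an integer tuple for comparison."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed, fixed):
    """True if the installed version is older than the first fixed version."""
    return parse_version(installed) < parse_version(fixed)

# Hypothetical delivery record: which library version each installation runs.
installations = {
    "customer-a": "2.9.10",
    "customer-b": "2.9.13",
}
fixed_in = "2.9.12"

to_update = [c for c, v in installations.items() if needs_update(v, fixed_in)]
print(to_update)  # installations that still need the patched release
```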
Step 3: Factor in the threat model
Not every vulnerability is automatically critical in real-world use. That’s why we also evaluate:
- Do we actually use the affected function?
- Is the system exposed to the outside world?
- Can the vulnerability actually be reached from outside?
This threat model helps us avoid panic and set priorities correctly. There is a difference between a system being theoretically vulnerable and being exploitable in practice.
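The three questions above can be condensed into a rough triage helper. The mapping below is purely illustrative – it is not our formal risk model and no substitute for a proper assessment such as CVSS scoring – but it shows how the answers translate into priorities.

```python
def triage(function_used, externally_exposed, reachable):
    """Map the three threat-model questions to a rough priority.
    The criteria and labels are illustrative only."""
    if not function_used:
        return "low: affected code path is not used"
    if externally_exposed and reachable:
        return "critical: patch and notify immediately"
    if reachable:
        return "high: schedule patch for the next release"
    return "medium: monitor, patch with regular updates"

print(triage(function_used=True, externally_exposed=True, reachable=True))
```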
Step 4: Communicate with the customer
On this basis, we can communicate confidently – not evasively, but clearly:
- ‘Your system uses the library, but not the affected version – no action is required.’
- ‘Yes, the vulnerability affects you. We have tested the patched version internally and will deliver it tomorrow.’
- Or: ‘No fix is currently available, but your specific usage scenario is not vulnerable. We are monitoring the situation closely and will keep you informed.’
These conversations are very different from before: instead of uncertainty, we show clarity. Instead of technical explanations, we convey confidence. This strengthens trust – especially in critical moments.
That’s a difference. And it shows in customer relationships: trust grows when we take responsibility and act in a transparent manner.
Responding better together – in the CRACoWi project
In the EU project CRACoWi, we are developing pragmatic approaches for precisely this purpose. Not with the expectation that SMEs will do everything perfectly right away, but with the goal of enabling them to take action in the first place.
- Sharing responsibility instead of passing it on
- Learning together
- Using tools that work in everyday life
This is in line with our attitude: we want to grow together, work fairly and remain human – even in stressful situations.
Conclusion: cyber security is teamwork
Secure software is not a luxury, but part of our responsibility. The CRA can help us work in a more organised, calm and transparent manner – no glossy strategy papers required.
Creating structures today reduces stress tomorrow.
Written by Yvette Teiken, erminas