Use AI with confidence — not guesswork.
Regulations around artificial intelligence are evolving fast. GDPR obligations don’t pause, the EU AI Act is rolling out, and your employees are already using AI tools, with or without a policy in place. This service helps mid-sized companies get organized, informed, and ready.
AI adoption in companies is outpacing the policies and frameworks designed to govern it. The result isn’t just legal exposure — it’s confusion, inconsistency, and avoidable risk that compounds the longer it’s ignored.
The pain points we see, and how we address them:
AI tools change how personal data flows through your organization. We help you map which tools interact with personal data, identify the applicable legal bases, and document data flows in a way that holds up to scrutiny. This includes reviewing vendor data processing agreements and evaluating whether your current AI usage is aligned with your existing privacy policies.
The EU AI Act classifies AI systems by risk — and your obligations depend entirely on which category applies to your tools and use cases. We help you understand the classification framework, identify which of your AI applications fall under it, and outline the documentation, transparency, and human oversight requirements that follow. No legal opinions — but a clear picture of where you stand and what needs to happen next.
Not all AI tools are created equal when it comes to how they handle your data. We assess the security posture of the platforms you're using: data retention policies, model training opt-outs, vendor contractual protections, and internal access controls. The goal is to give you an informed view of the risk surface — so you can make better procurement and usage decisions.
Before you can govern AI, you need a policy that people can actually follow. We help you draft practical, readable guidelines — acceptable use policies, role-based access considerations, and a governance structure that scales with your organization. No bureaucratic overhead. Just clear rules that protect the company and give employees the direction they need.
A structured process — practical by design, not theoretical.
We start with a structured review of your current AI tool landscape, data flows, and existing policies. The goal is an honest gap analysis: where are you exposed, and what's missing?
Based on the assessment, we build a prioritized action plan. This includes drafted policy documents, a compliance roadmap, and clear recommendations your team can act on — without needing a law degree to understand them.
We hand over everything in a format your team can own and maintain going forward. You're not dependent on us indefinitely — the output is frameworks and documentation that live with your organization.
Ralf Hug and Shifu Marketing are not lawyers and do not provide legal advice. Nothing in this service constitutes legal counsel, a formal compliance certification, or a binding legal opinion.
What we offer is strategic and operational guidance: helping you understand the regulatory landscape, identify where your organization has gaps, ask better questions of your legal counsel, and build internal structures that reflect best practices.
For binding legal interpretations, DPO appointments, or formal compliance sign-off, you will need a qualified attorney or certified Data Protection Officer. We’re happy to help you understand what questions to bring to them.
Think of this engagement as your compliance starting point — not a replacement for legal expertise.
You don’t need to have everything figured out before we talk. Most companies come in with a vague sense that something needs to happen — and leave with a concrete plan for making it happen.
If your company operates in the EU, sells to EU customers, or uses AI systems whose outputs affect people in the EU, the EU AI Act is likely to apply to you, regardless of where you’re headquartered. The key question is which risk category your AI systems fall into, as that determines your specific obligations. We help you work through exactly that.
Existing GDPR policies typically don’t account for how AI tools process, store, or potentially train on personal data. Most standard policies were written before AI tools became part of everyday workflows. We assess whether your current documentation covers your actual AI usage — and close the gaps where it doesn’t.
Most engagements start with a structured assessment that takes 1–2 weeks, depending on the size of your organization and the number of AI tools in use. From there, the time to build out policies and a governance framework depends on scope — but most mid-sized companies have a working set of documents within 4–6 weeks.
No — and this is actually the most common situation we see. Retroactive governance is very achievable. The starting point is always an honest inventory of what tools are in use and how, followed by a framework that formalizes what’s working and addresses what isn’t. Starting late is far better than not starting at all.
