1. Our Commitment
At Synergy AI, we are committed to developing and deploying artificial intelligence responsibly. This document outlines our principles, practices, and commitments for AI governance in alignment with the EU AI Act and industry best practices.
2. AI System Classification
2.1 Risk Assessment
Under the EU AI Act framework, our AI systems are classified as limited-risk applications. Our platform provides decision-support tools for M&A professionals, with all final decisions made by qualified humans.
2.2 Transparency Obligations
AI Disclosure: All AI-generated content on our platform is clearly labeled. Users are informed when interacting with AI systems.
3. Core Principles
3.1 Human Oversight
- AI outputs are advisory, not determinative
- All generated content requires human review before use
- Users maintain full control over final decisions
- Clear escalation paths to human support
3.2 Transparency
- Clear disclosure of AI involvement in outputs
- Explainable recommendations where feasible
- Documentation of AI capabilities and limitations
- Regular accuracy assessments
3.3 Fairness
- No discrimination based on protected characteristics
- Regular bias audits of AI outputs
- Diverse training data considerations
- Equal access to AI features across user tiers
3.4 Privacy
- Minimal data collection principle
- User data never used to train models without consent
- Secure processing within EU infrastructure
- Clear data retention policies
3.5 Accountability
- Designated AI governance team
- Regular internal audits
- Incident response procedures
- Feedback mechanisms for users
4. AI Limitations
Important Disclaimer: Our AI systems have inherent limitations:
- Outputs may contain errors or inaccuracies
- AI cannot replace professional judgment
- Market conditions change faster than AI can adapt
- Complex nuances may not be captured
Always verify AI-generated content before relying on it for business decisions.
5. Technical Safeguards
5.1 Model Management
- Version control for all AI models
- Testing before deployment
- Performance monitoring
- Rollback capabilities
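The practices above can be sketched in code. The following is a minimal illustration, not our production system: the `ModelRegistry` class, its method names, and the test-gating rule are all hypothetical, chosen only to show how version control, pre-deployment testing, and rollback fit together.

```python
from dataclasses import dataclass


@dataclass
class ModelVersion:
    """A registered model version (hypothetical structure)."""
    name: str
    version: str
    passed_tests: bool = False  # set True only after pre-deployment testing


class ModelRegistry:
    """Sketch of version control, gated deployment, and rollback."""

    def __init__(self):
        self._versions = {}  # model name -> {version string: ModelVersion}
        self._history = {}   # model name -> list of deployed version strings

    def register(self, mv: ModelVersion) -> None:
        self._versions.setdefault(mv.name, {})[mv.version] = mv

    def deploy(self, name: str, version: str) -> ModelVersion:
        mv = self._versions[name][version]
        if not mv.passed_tests:
            # Testing before deployment: untested versions are refused.
            raise ValueError(f"{name} {version} has not passed pre-deployment tests")
        self._history.setdefault(name, []).append(version)
        return mv

    def active(self, name: str) -> ModelVersion:
        return self._versions[name][self._history[name][-1]]

    def rollback(self, name: str) -> ModelVersion:
        # Rollback capability: revert to the previously deployed version.
        history = self._history[name]
        if len(history) < 2:
            raise RuntimeError("no earlier deployed version to roll back to")
        history.pop()
        return self.active(name)
```

In a real deployment, performance monitoring would feed back into this loop, triggering a rollback when an active version degrades.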
5.2 Content Filtering
- Input validation and sanitization
- Output quality checks
- Harmful content detection
- Confidentiality protections
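To make the filtering stages above concrete, here is a simplified sketch, assuming a regex-based denylist. The `BLOCKED_PATTERNS` list and both function names are illustrative placeholders; production filtering would use more sophisticated detection than pattern matching alone.

```python
import re

# Hypothetical denylist of patterns flagged in model output.
BLOCKED_PATTERNS = [r"\bconfidential\b", r"\bdo not distribute\b"]


def sanitize_input(text: str, max_len: int = 10_000) -> str:
    """Input validation and sanitization: bound length, trim
    whitespace, and strip non-printable control characters."""
    text = text[:max_len].strip()
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)


def check_output(text: str) -> list[str]:
    """Output quality check: return the denylist patterns that
    matched, so flagged content can be held for human review."""
    return [
        pattern
        for pattern in BLOCKED_PATTERNS
        if re.search(pattern, text, re.IGNORECASE)
    ]
```

A non-empty return from `check_output` would route the response to review rather than deliver it directly, consistent with the human-oversight principle in Section 3.1.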
6. Third-Party AI Providers
We use third-party AI services (currently Google Gemini) with appropriate contractual safeguards:
- Data processing agreements in place
- EU data residency requirements
- No model training on customer data
- Regular security assessments
7. User Rights
Users of our AI systems have the right to:
- Know when AI is being used
- Understand how AI influences outputs
- Challenge AI-generated results
- Request human review of AI decisions
- Opt out of specific AI features
- Report concerns about AI behavior
8. Continuous Improvement
We are committed to:
- Regular reviews of AI governance practices
- Monitoring of regulatory developments
- Incorporation of user feedback
- Adoption of industry best practices
- Transparency about our progress
9. Reporting Concerns
If you have concerns about our AI systems, please contact us:
AI Governance Team
Email: support@masynergy.eu
All reports are reviewed and addressed within 5 business days.
10. EU AI Act Compliance
We actively monitor the implementation of the EU AI Act and are committed to full compliance. This includes:
- Risk-based assessment of all AI systems
- Documentation and record-keeping requirements
- Conformity assessments where required
- Cooperation with regulatory authorities