Compiled by Lubos, Edge AI Auditor & Open Compliance Advisor
Version 1.1, June 2025

| Term | Definition |
|---|---|
| Edge AI | AI models that run directly on devices (e.g., phones, wearables) rather than in the cloud. Enables real-time decisions and improves privacy. |
| Inference | The moment an AI model makes a decision based on input, e.g., voice recognition triggering a response. |
| On-Device Learning | AI adapting to user behavior directly on a device without sending data to a server. |
| Lightweight Model | A small, efficient version of an AI model designed to run on devices with limited memory or compute. |
| Model Compression | Techniques that shrink model size while preserving performance; used to fit AI on embedded systems (see the sketch after this table). |
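
Model compression (last row above) is easiest to see in code. Below is a minimal sketch using PyTorch's dynamic quantization, which stores Linear-layer weights as int8 for roughly a 4x size reduction; the toy network and layer sizes are placeholders, not a real edge deployment.

```python
import torch
import torch.nn as nn

# A toy network standing in for a model headed to an edge device.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Dynamic quantization rewrites Linear weights from float32 to int8,
# shrinking the model with usually minor accuracy loss.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference works as before, just on a smaller model.
sample = torch.randn(1, 128)
print(quantized(sample).shape)  # torch.Size([1, 10])
```
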
| Term | Definition |
|---|---|
| EU AI Act | A regulation defining AI risk tiers; high-risk systems must follow documentation, risk control, and monitoring protocols. |
| GDPR | The EU's data privacy law. Applies to any AI system that handles user data, including edge devices. |
| High-Risk AI | AI systems affecting rights, safety, or access (e.g., biometric ID, healthcare). Must meet strict EU AI Act rules. |
| Conformity Assessment | A documented review process showing your AI meets legal standards. Often required before launch in Europe. |
| CE Certification | EU's product compliance mark. May include AI conformity in regulated sectors like healthcare or toys. |

| Term | Definition |
|---|---|
| AI Audit | An independent evaluation of your AI system's risks, fairness, documentation, and regulatory compliance. |
| Bias Detection | Analyzing model outputs to ensure fair treatment across user groups, e.g., no gender or racial bias (see the sketch after this table). |
| Explainability | The ability to understand and describe why an AI made a certain decision; required for accountability. |
| Risk Scoring | Estimating the potential harm an AI system could cause. Helps decide if a system is "high risk." |
| Traceability | Keeping a record of model development, training data, and deployment paths. |
| Provenance | The documented origin of data; establishes that a dataset was ethically sourced and is compliant. |
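
One concrete bias-detection check is demographic parity: compare positive-prediction rates across user groups. Here is a minimal sketch with numpy; the predictions, group labels, and the resulting 0.50 gap are invented for illustration.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Spread between the highest and lowest positive-prediction
    rates across groups; 0.0 means perfectly equal rates."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical binary predictions and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap, per_group = demographic_parity_gap(preds, group)
print(per_group)                 # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap:.2f}")  # 0.50 -> worth investigating
```

Demographic parity is only one fairness metric; an audit would typically report several (e.g., equalized odds) before drawing conclusions.
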
| Term | Definition |
|---|---|
| Federated Learning | A way to train AI across multiple devices without transferring raw data. Enhances privacy (see the sketch after this table). |
| Privacy-Preserving Inference | Running AI models in ways that don't expose sensitive user data during prediction. |
| Secure Enclave / TEE | A hardware-based safe zone in a chip where sensitive computation occurs; helps enforce trust. |
| Local Logging | Storing AI actions on-device, not in the cloud. Enables auditability for edge AI. |
| Watermarking | Embedding hidden markers in outputs or models to prove origin or compliance; helps prevent unauthorized use. |
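
The core aggregation step of federated learning, federated averaging (FedAvg), fits in a few lines. A minimal sketch; the weight vectors and client sample counts below are invented.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: combine locally trained weights, weighting each client
    by its number of training samples. Raw data never leaves a device;
    only model weights are shared."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three devices trained the same tiny model locally.
weights = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
sizes = [100, 300, 600]

global_weights = federated_average(weights, sizes)
print(global_weights)  # [1.11 0.89] -> sent back to all devices
```
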
| Term | Definition |
|---|---|
| Compliance Badge | A visual or digital label signifying that an AI system has passed a defined compliance review. |
| Self-Assessment Kit | A lightweight checklist or tool a company uses internally to test for compliance before hiring experts. |
| Trust Layer | The external presentation of safety: documentation, disclaimers, certificates, and UI indicators. |
| Audit Trail | A complete log of what was tested or adjusted in an AI system, when, and how (see the sketch after this table). |
| Compliance-as-a-Service | Subscription or consultant-based offerings that manage AI compliance for startups and device makers. |
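
An audit trail is commonly implemented as an append-only, hash-chained log, so that tampering with any past entry breaks every hash after it. A minimal sketch; the event names and fields are hypothetical.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log in which each entry commits to the previous
    entry's hash, making retroactive edits detectable."""

    def __init__(self):
        self.entries = []

    def record(self, event, detail):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"ts": time.time(), "event": event,
                "detail": detail, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})

trail = AuditTrail()
trail.record("bias_test", {"parity_gap": 0.05, "result": "pass"})
trail.record("model_update", {"version": "1.3.2"})
print(len(trail.entries), trail.entries[-1]["prev"][:8])
```
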
| Term | Definition |
|---|---|
| Model Card | A structured document that summarizes what a model does, what data it was trained on, and how it should be used. |
| Model Lifecycle Documentation | A formal timeline and audit trail of how an AI system was built, tested, and deployed. |
| Post-Deployment Monitoring | Ongoing observation of AI behavior after launch to detect issues or drift (see the sketch after this table). |
| Impact Assessment (DPIA/AIA) | A formal, often legally required evaluation of whether an AI system could harm privacy, fairness, or safety. DPIA = Data Protection Impact Assessment (GDPR); AIA = Algorithmic Impact Assessment. |
| Sandbox Mode | A secure test environment for safely experimenting with new or updated models. |
| Model Governance | The policies, workflows, and approval checkpoints that control how AI systems are developed and released. |
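
Post-deployment monitoring usually reduces to tracking a drift statistic; one widely used choice is the Population Stability Index (PSI), which compares the distribution of model scores at launch against live traffic. A minimal sketch with synthetic scores; the ~0.2 alert threshold is a common rule of thumb, not a regulatory requirement.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample (e.g., scores at launch) and a
    production sample; values above ~0.2 usually signal drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # scores at launch
live = rng.normal(0.4, 1.0, 5000)      # scores a month later
print(f"PSI: {population_stability_index(baseline, live):.3f}")
```
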
| Term | Definition |
|---|---|
| Differential Privacy | A privacy technique that injects statistical "noise" to protect individual data during analysis (see the sketch after this table). |
| Data Minimization | A design principle: collect only what's needed for the model to function. |
| Synthetic Data | Artificially generated data that mimics the statistical properties of real data, used to train or test AI without exposing personal information. |
| Shadow Data | Data that is collected or stored outside official systems or policies, creating hidden compliance risks. |
| Consent Management | Systems that record and enforce when users have opted in or out of data collection. |
| Edge Security Stack | A layered defense (hardware, firmware, and software) protecting on-device AI from attacks. |
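
The canonical differential-privacy primitive is the Laplace mechanism: clip the data to a known range, compute the statistic, and add noise scaled to the statistic's sensitivity divided by the privacy budget epsilon. A minimal sketch for a private mean; the ages are invented.

```python
import numpy as np

def private_mean(values, epsilon, lower, upper):
    """Epsilon-differentially-private mean via the Laplace mechanism.
    Clipping to [lower, upper] bounds each record's influence, so the
    mean's sensitivity is (upper - lower) / n."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return clipped.mean() + noise

ages = np.array([23, 35, 41, 29, 52, 38])  # made-up user records
print(private_mean(ages, epsilon=1.0, lower=0, upper=100))
```

Smaller epsilon means stronger privacy but noisier answers; repeated queries consume the budget, which is why deployments track cumulative epsilon.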