      The year 2026 marks the shift from declarative regulations to regulations that are effectively enforced. The directive on transparency, the use of artificial intelligence in HR processes, data protection, and the expanded rights related to work flexibility—all require more than well-formulated policies. They require real implementation mechanisms, observable day by day, in the concrete relationship between employees, managers, and the organization.

      Increasingly often, the difference between “compliant” and “non-compliant” is no longer determined by the existence of a document or a policy, but by how decisions are experienced at the operational level: who has access to data, what is automated, what is monitored, what can be challenged, and what cannot.

      In this context, the need for explicit boundaries in the use of AI is becoming ever clearer. Their absence is no longer just an abstract ethical issue, but one of organizational trust. When technology shifts from supporting work to observing, evaluating, and replicating human work, the risk is that AI will be perceived not as a tool for efficiency, but as a source of anxiety, suspicion, and psychological withdrawal.

      Mădălina Racovițan

      Tax Consulting Partner, Head of People Services, Co-Head of Consumer, Retail & Leisure

      KPMG in Romania
