The lecture introduced the question of how to set up management systems in social services so that they serve the objectives rather than themselves. Without clear rules, evidence, and shared indicators, it is hard to compare what truly works. The proposed framework aims to connect the different models and establish a measurable common ground across practice.
Management systems: a means, not an end
Social services operate at the intersection of social, health, and often also educational agendas, and therefore need clearly defined systems. In practice, legislative standards, self-assessment models such as CAF and EFQM, and requirement schemes like ISO 9001 are used; these introduce rules, audits, and possible certification. Although they take different paths, they all promise the same thing: better quality and reliability of services. The problem arises when the common "link" is missing, namely shared indicators by which the outcomes produced by the different approaches could be meaningfully compared.
Stakeholders, rules, and evidence
In social services, the network of “who requests, who receives, and who pays” is broad, and the requirements are not always aligned, which complicates setting prices as well as expectations. The speaker emphasized that rules must be workable; otherwise everyone just pretends to follow them and pointless games ensue. Evidence is part of the rules — ideally measurable — so it is not just assertion against assertion. And finally, a “heretical” note: an appropriate level of partners’ dissatisfaction can be healthy, because it keeps the dialogue alive and moves collaboration forward.
Review cycle and shared indicators
The proposed approach forms a simple cycle: clarify the stakeholders, agree on clear and workable rules, define the evidence points, and then review the outcomes as well as the costs. Based on the review, decide what to keep, where to add, and where to reduce. In practice, however, there is resistance to transparent indicators: "why should one organization expose itself when the others don't?" Without shared, nationally agreed indicators across models, efforts therefore fragment. The framework should be a "light at the end of the tunnel" that offends no model but requires honoring the same yardsticks, making it possible to compare which approach leads to better results.
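The decision step of the cycle can be illustrated with a minimal sketch. All names, indicator values, and thresholds below are hypothetical assumptions for illustration; the lecture did not prescribe any concrete metric or formula.

```python
# Illustrative sketch of the review step of the proposed cycle.
# Activities, indicator values, and thresholds are hypothetical;
# in practice the yardstick would be the nationally agreed indicators.

def review(rules, evidence):
    """Compare each activity's outcome per unit of cost against the
    agreed target and return a keep / add / reduce decision."""
    decisions = {}
    target = rules["target_outcome_per_cost"]
    for activity, record in evidence.items():
        ratio = record["outcome"] / record["cost"]
        if ratio >= target * 1.2:
            decisions[activity] = "add"      # clearly above target: expand
        elif ratio >= target:
            decisions[activity] = "keep"     # on target: continue as is
        else:
            decisions[activity] = "reduce"   # below target: scale back
    return decisions

# Hypothetical evidence points gathered under shared indicators.
evidence = {
    "counselling": {"outcome": 90, "cost": 60},
    "day_centre":  {"outcome": 50, "cost": 55},
    "outreach":    {"outcome": 80, "cost": 50},
}
rules = {"target_outcome_per_cost": 1.0}

print(review(rules, evidence))
```

The point of the sketch is the shared yardstick: once every organization reports the same `outcome` and `cost` evidence, the keep/add/reduce decision becomes a comparison rather than an assertion.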