[Image by Mario Purisic: silhouettes of a large group of people standing and sitting on a beach at sunset.]
Best Practice, Accessibility

What makes a system socially acceptable?

Back in the olden days – and actually until quite recently – IT systems were designed first and foremost according to what engineers considered sensible, efficient, and really cool. That wasn't the worst way to go about things, and talented, self-assertive engineers managed to create a number of great products and services. But there was a massive problem: engineers have blind spots, and just because they think a system is fantastic doesn't mean that other professionals, or society in general, share that view. That is why a lot of products and services failed. Enter user-centered design, human-centered design – and the need for socially acceptable systems. In this post, we'll briefly explore what kind of so-called non-functional requirements a state-of-the-art, interactive, agent-driven IT platform needs to meet in order to be recognized and welcomed by basically everyone.

Our checklist is based on R&D work for the SERMAS project, but it can probably serve as a blueprint for any service system that lets users interact via text, voice, or gestures and offers bespoke information. The list was distilled following a consortium session with cognitive scientist Christina Iani, who had previously pointed to several resources (like this one or this one or this one) and to the fact that "acceptance" assumes many different connotations. Let's cut to the chase.

Acceptance of agents

In general, modern platforms should satisfy the goals and needs of all users and operate in full compliance with their social context. They should be transparent, safe, secure, explainable, and trusted. Their agents (= bits of software that act on behalf of the user and other systems) ought to elicit acceptability (i.e., a positive a priori evaluation by users when confronted with the system), which will then lead to general acceptance (i.e., a positive and long-term ex-post assessment).

More specifically, there needs to be:

  • Physical acceptance, i.e. users must perceive the agent as likable and credible based on its appearance, its guise, its exterior

  • Behavioral acceptance, i.e. users must perceive the agent's non-verbal communication as believable and the interaction as fluent, natural, and pleasant

  • Functional acceptance, i.e. users must perceive the agent as easy-to-use, useful, accurate, and innovative

  • (Specific) social acceptance, i.e. users must sense a social entity in the agent, deem it capable of performing social behavior, and be satisfied with what other people think of the human-machine interaction

  • Cultural acceptance, i.e. users must accept the agent because it complies with their culture in general (educational values, tech-savviness etc.)

  • Representational acceptance, i.e. users must think of and refer to artificial agents in a positive way

Acceptance based on broader social concerns

Apart from this (already multi-faceted) requirement of social acceptance that focuses on the system's agents, proxies, robots – call them what you like – there are also obligations based on much broader social concerns. They can be summarized in the following terms:

  • Compliance – our system must comply with the rules of DW, the laws of Germany, and the regulations of the European Union

  • Ethics – our system must be civil, respectful, inclusive, privacy-friendly, transparent, and trustworthy

  • Performance – our system should be fast, safe, reliable, robust, scalable

  • Economics – our system should be innovative, realizable on designated budgets, and potentially marketable

  • Sustainability – our system should be sufficiently documented, generally compatible and modular

  • Eco-Friendliness – our system must be non-toxic, as energy-efficient and resource-saving as possible, durable, and recyclable

Non-acceptance

While this blueprint of social acceptance requirements is possibly flawed or incomplete (please send your comments), one thing seems quite certain: every potential idea, concept, and component that does NOT comply with the principles laid out here is automatically ruled out. No matter what a self-assertive engineer might tell you.
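As a loose illustration (all names are hypothetical and not part of the SERMAS codebase), this review logic could be sketched as a simple checklist: a candidate component is acceptable only if every criterion from both lists above is satisfied.

```python
from dataclasses import dataclass, field

# Illustrative criteria taken from the two checklists above
AGENT_CRITERIA = {
    "physical", "behavioral", "functional",
    "social", "cultural", "representational",
}
SOCIETAL_CRITERIA = {
    "compliance", "ethics", "performance",
    "economics", "sustainability", "eco-friendliness",
}

@dataclass
class Component:
    """A candidate idea, concept, or component under review."""
    name: str
    satisfied: set = field(default_factory=set)

    def missing(self) -> set:
        # Everything from both checklists that is not (yet) satisfied
        return (AGENT_CRITERIA | SOCIETAL_CRITERIA) - self.satisfied

    def is_acceptable(self) -> bool:
        # A single unmet criterion rules the component out
        return not self.missing()

kiosk = Component(
    "info-kiosk avatar",
    satisfied=(AGENT_CRITERIA | SOCIETAL_CRITERIA) - {"eco-friendliness"},
)
print(kiosk.missing())        # {'eco-friendliness'}
print(kiosk.is_acceptable())  # False
```

The point of the all-or-nothing `is_acceptable` check is exactly the rule stated above: non-compliance with any single principle is enough for rejection.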

Author
Alexander Plaum