Panos Moutafis, Ph.D.

Is facial analysis inherently wrong?


Our ethical facial analysis service was used at a leading industry conference to offer a deeper understanding of attendee satisfaction.


It sparked an online debate, with data privacy being the biggest concern. To help everyone learn more, let's start with a core definition.


Data privacy is the ability of individuals to control their personally identifiable information.

Face recognition identifies individuals and therefore poses a risk to data privacy. Facial analysis computes anonymous statistics and poses no such risk.



Facial analysis is often conflated with face recognition, but they are different.



With the proper definitions established, we may dive deeper into the most frequent concerns about using this form of non-identifying facial analysis.


After all, nobody in their right mind wants to repeat the mistakes we have seen with online marketing or face recognition deployed by bad actors.


Concern 1: I don't want to be analyzed

When the analytics obtained from a service (any service) cannot be tied to a specific individual, they do not infringe on anyone's data privacy.


Some tools analyze a designated camera feed instantly — without storing or transmitting the footage — and without computing personal identifiers.



Edge computing protects privacy by design: the raw data is processed on the device and never leaves it.



In these instances, it is mathematically impossible to identify a person from the statistics. One may only understand high-level trends.
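To make this concrete, here is a minimal sketch of how such an edge pipeline could work. It is illustrative only, not Zenus's actual implementation; the detect_faces detector and the per-minute buckets are assumptions for the example.

```python
# Illustrative sketch of privacy-by-design edge analytics.
# Not Zenus's actual implementation; `detect_faces` is a stand-in
# for a hypothetical on-device model returning sentiment scores.

from collections import defaultdict
from datetime import datetime

def process_feed(frames, detect_faces):
    """Aggregate anonymous statistics from a live camera feed.

    Each frame is analyzed in memory and discarded immediately:
    no footage is stored or transmitted, and no identifiers
    (face templates, crops, embeddings) are ever computed.
    """
    stats = defaultdict(lambda: {"faces": 0, "positive": 0})
    for frame in frames:
        bucket = datetime.now().strftime("%H:%M")  # per-minute bucket
        for sentiment in detect_faces(frame):      # scores in [0, 1]
            stats[bucket]["faces"] += 1
            stats[bucket]["positive"] += sentiment > 0.5
        del frame  # raw pixels never leave the device
    return dict(stats)  # only high-level trends remain
```

Because no identifier is ever computed or stored, there is nothing in the output that could be reversed into an identity.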


The community must support such privacy-by-design solutions to protect our rights.

It is also worth pointing out that business events are spaces of high visibility. Attendees wear a badge around their necks showing their names.


They agree to be photographed (even when not paying attention) and to have these pictures published online, on social media, and elsewhere.


They provide deeply sensitive personal data to registration companies and walk into venues with widespread CCTV systems.

Concern 2: Advance notice about the service

As a company leading the development of ethical AI, we want our partners to reference and explain our privacy policy in their communications.

Our AI service neither stores nor transmits any personally identifiable information — all data is aggregated and anonymized at the source.


Even though there is no material risk to people’s privacy, we advocate for advance notice because it is the best way to build trust in the community.

As a general comment about consent to be recorded, CCTV systems exist in all public spaces, along with disclosures about camera surveillance.

People know they are recorded in public. But unlike systems that monitor your every move, Zenus does not capture, store, or transmit personal data.

Educating the audience is the best way to explain that and enhance trust.

Concern 3: The system does not do what we are told it does

It does. This is a technical field with nuances that make a big difference. Yours truly has spent over a decade working in this domain alone.


There are peer-reviewed journals, benchmarks, and legal frameworks we have to follow, along with numerous blind tests from academics and practitioners.


Enterprise professionals might also be familiar with GDPR rules, third-party penetration tests, and SOC 2 standards, among other safeguards.


The service complies with all of the above and more.


When in doubt, one should reach out and start a conversation. And this is exactly what we do — extend an offer for public debate and questions.


Education is the way to go, and we are excited to have the conversation!


Concern 4: Decisions shouldn't be made with AI


Knowledgeable professionals always recommend that users consider multiple sources of information, including surveys.


Facial analysis offers another layer of data we never had before. Organizers and brands can make informed decisions to create better experiences.


They test the technology multiple times to ensure it matches what they observe. And they have all the necessary context about their event.

In these instances, there is nothing wrong with trusting the data to make informed adjustments in real time.

Concern 5: Cameras may get hacked

Our service goes through the same vetting process as all other vendors, including third-party penetration tests and audits.

So even though the statement is factually correct, it holds equally true for the laptops we use every day, our smartphones, and all CCTV systems.


If the true concern is security and we apply the same standards to everything, then everything must go.

With this fearful logic, organizers should start collecting attendees' phones at the entrance and remove the CCTV equipment from venues.

They should also terminate AV companies that stream content (often pointing cameras at the audience) and drop all registration companies.

After all, hacking a registration company is more damaging than gaining access to aggregated and anonymized data.

Concern 6: The scope of surveillance will increase

The concern we all share is bad actors who do not comply with privacy regulations. But it is safe to use products with built-in privacy safeguards.


One of the worries expressed was about other computer vision products, such as our new badge scanning solution.


It detects QR codes up to 6–7 feet from the camera. The service requires explicit consent before data is tied to a specific individual.


There are also easy opt-in and opt-out mechanisms to offer peace of mind. It is no different from the RFID and BLE tracking used at events for decades.


It is no different from manual badge scanning for lead retrieval, access control, and assigning CEU credits.
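As a rough sketch of how consent gating like this could work (hypothetical names, not the actual Zenus API), a scan might only be tied to a person after an explicit opt-in:

```python
# Illustrative sketch of consent-gated badge scanning.
# Hypothetical names, not the actual Zenus API.

anonymous_counts = {}   # session -> number of badge scans (no identity)
consented = set()       # badge IDs whose owners explicitly opted in

def record_scan(badge_id, session, attendees):
    """Count every scan; attach identity only with explicit consent."""
    anonymous_counts[session] = anonymous_counts.get(session, 0) + 1
    if badge_id in consented:                  # opt-in granted earlier
        attendees.setdefault(badge_id, []).append(session)

def opt_out(badge_id, attendees):
    """Easy opt-out: stop linking and erase previously linked data."""
    consented.discard(badge_id)
    attendees.pop(badge_id, None)
```

The design choice is that anonymous counting and identity linking are separate paths, so revoking consent removes the link without touching the aggregate statistics.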


These products and tools are necessary for event organizers to compete with digital marketing and design better experiences.

Concern 7: Informed consent

Some people call for mandatory consent requirements for all services — even the ones that do not collect personally identifiable information.


But that will result in an effective ban on numerous technological advancements. And the rhetorical question is — to what end?


If one insists on that (everyone is entitled to their opinion), they should also suggest an alternative solution that offsets the cost with an equal or greater benefit.


Until then, there is consensus among institutions and practitioners that this is unnecessary because there is no risk to data privacy.

Summary

Privacy regulations offer unambiguous guidelines about identifiable versus non-identifiable data, including requirements around consent and notice.


Security frameworks are also universally accepted, and they minimize risk while increasing trust in technology products.


A solution that is both accurate and implements privacy by design is safe to use. It offers a net positive effect in the events industry and beyond.


This is why the community supports those who deploy Ethical AI — a term defined by established processes, public policies, and facts.

Clarifications

The facial recognition offering referenced on our website is a legacy feature not deployed at the event.


It has been mentioned that Zenus offers racial recognition, but this is not true. The comment in question was a suggestion about how one could measure inclusivity.


When a session is short, it is impossible to address everything in detail. Instead, the goal is to start a conversation.


It looks like we can say — mission accomplished!



*Original blog was posted here.


