'Skull Sounds' Offer New Form of Password

A model wears a pair of Google Glass smartglasses in New York, February 7, 2014. The device has been used by researchers to develop a new form of biometrics based on a user's skull. Rommel Demano/Getty Images

First there was fingerprint identification, then came brainprints; now researchers have developed a new form of biometrics based on the buzz generated when sound passes through the skull.

The SkullConduct system—developed by researchers at the University of Stuttgart, Saarland University and the Max Planck Institute for Informatics—identifies individuals by the way sound is conducted through their skull, allowing smartglasses or VR headsets to recognize their wearer.

By integrating the system with a pair of Google Glass smartglasses, the researchers were able to test it on 10 participants and identify individuals with 97 percent accuracy.

"If recorded with a microphone, the changes in the audio signal reflect the specific characteristics of the user's head," the paper, published by the ACM, states.

SkullConduct uses the bone conduction speaker and microphone readily integrated into the eyewear computer and analyses the characteristic frequency response of an audio signal sent through the user’s skull. ACM

"Since the structure of the human head includes different parts such as the skull, tissues, cartilage, and fluids and the composition of these parts and their location differ between users, the modification of the sound wave differs between users as well."
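In other words, each skull acts as a distinctive acoustic filter, and a user can be recognized by comparing the frequency response of a recorded signal against enrolled templates. The following is a minimal, hypothetical sketch of that idea, not the researchers' actual code: it simulates each "skull" as a random filter applied to a shared test signal and matches a new recording to the closest enrolled spectrum (the names, filter simulation, and nearest-neighbor comparison are all illustrative assumptions).

```python
import numpy as np

def spectrum(signal, n_fft=512):
    # Magnitude spectrum of the recorded audio, normalized so that
    # overall loudness does not dominate the comparison.
    mag = np.abs(np.fft.rfft(signal, n=n_fft))
    return mag / (np.linalg.norm(mag) + 1e-12)

def identify(recording, enrolled):
    # Nearest-neighbor match: return the enrolled user whose stored
    # frequency response is closest to that of the new recording.
    probe = spectrum(recording)
    return min(enrolled, key=lambda user: np.linalg.norm(probe - enrolled[user]))

# Toy enrollment: each "skull" modifies the same white-noise test
# signal differently, simulated here with fixed random filters.
rng = np.random.default_rng(0)
test_signal = rng.standard_normal(2048)
filters = {name: rng.standard_normal(32) for name in ("alice", "bob", "carol")}
enrolled = {name: spectrum(np.convolve(test_signal, h)) for name, h in filters.items()}

# A fresh, slightly noisy recording from "bob" still matches correctly.
noisy = np.convolve(test_signal, filters["bob"])
noisy = noisy + 0.05 * rng.standard_normal(noisy.shape)
print(identify(noisy, enrolled))
```

The published system is more sophisticated (it uses learned audio features rather than raw spectra), but the pipeline shape is the same: play a known signal, record it after it has passed through the head, and classify the resulting frequency response.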

The system is yet to be tested in an environment that has background noise, so it is unlikely to replace other biometric systems anytime soon.

The researchers did, however, claim that the passive nature of SkullConduct as a form of identification gives it advantages over other systems, and that it could therefore find use in the real world.

The study concludes: "While other biometric systems require the user to enter information explicitly (e.g., place the finger on a fingerprint reader), our system does not require any explicit user input."

SkullConduct will be presented at the ACM CHI Conference on Human Factors in Computing Systems in San Jose, California, next month.