Jungen Koimizu, Toshiaki Numajiri, Kazuto Kato
Machine learning is expected to help advance many fields, including plastic surgery. However, plastic surgeons must be aware that artificial intelligence (AI) could impose a biased view of patients instead of promoting objectivity.
One illustrative example is an AI for measuring facial attractiveness based on semisupervised machine learning. At the beginning of the procedure, researchers assign attractiveness scores to portraits of famous actors and actresses as a group of the most beautiful faces. The AI then memorizes each score–photograph pair as a sample dataset. In addition, the AI memorizes a set of least attractive portraits prepared by computer simulation. Based on this learning of the most and least attractive faces, the AI predicts the attractiveness of other faces.
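The pipeline described above can be sketched as a simple nearest-neighbor regression: score a new face by comparing it with labeled examples at both extremes. This is a minimal illustration only; the feature vectors, scores, and the `predict` helper are all hypothetical stand-ins, not the cited researchers' actual model.

```python
# Minimal sketch of scoring a face from labeled extremes (all values
# hypothetical; real systems would use learned facial features).
import math

# Labeled samples: (feature vector, attractiveness score).
labeled = [
    ([0.9, 0.8], 9.5),  # portraits scored as highly attractive
    ([0.8, 0.9], 9.0),
    ([0.1, 0.2], 1.0),  # simulated "least attractive" faces
    ([0.2, 0.1], 1.5),
]

def predict(features, k=3):
    """Predict a score as the distance-weighted mean of the k
    nearest labeled neighbors."""
    nearest = sorted((math.dist(features, f), s) for f, s in labeled)[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * s for w, (_, s) in zip(weights, nearest)) / sum(weights)

score = predict([0.5, 0.5])  # a "moderate" face falls between extremes
```

The key point for the ethical argument is that every prediction is interpolated from the initial human scores, so any bias in those scores propagates directly into the output.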
The researchers maintain that this AI can predict the attractiveness of a "moderate" face, for which people would otherwise give variable scores based on their preferences, and that it therefore allows beauty to be evaluated objectively.
However, whether such an attractiveness-measuring AI is ethically sound is questionable. One ethical reservation is the possible failure of shared decision-making. AI could propose a biased view, attributable to bias in both the values and the dataset on which it is based. First, it is impossible for individuals to be perfectly free from bias when scoring attractiveness, as no common, objective scale of attractiveness exists. Second, existing databases employed in attractiveness research have skewed ethnicity and gender ratios, which could lead to biased predictions. Therefore, if clinical practice relied on such an AI, it would force the biased perspective of third parties (neither the surgeon nor the patient) onto patients. This could disturb shared decision-making, the process through which patients' autonomy is respected.
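The second reservation, dataset skew, can be made concrete with a toy calculation. In the hypothetical sketch below, a "beauty norm" learned as the mean of a training set drifts toward the overrepresented group, so a face typical of the underrepresented group scores lower even though nothing about it is objectively less attractive. All groups, features, and numbers are invented for illustration.

```python
# Hypothetical illustration of dataset bias: the learned norm tracks
# whichever group dominates the training data.
group_a = [[0.8, 0.2]] * 9   # overrepresented group's feature vectors
group_b = [[0.2, 0.8]] * 1   # underrepresented group

dataset = group_a + group_b

def learned_norm(data):
    """'Ideal' face = feature-wise mean of the training set."""
    n = len(data)
    return [sum(face[i] for face in data) / n for i in range(len(data[0]))]

def attractiveness(face, norm):
    """Higher score = closer to the learned norm (maximum 1.0)."""
    return 1.0 - sum(abs(f - m) for f, m in zip(face, norm)) / len(face)

norm = learned_norm(dataset)
score_a = attractiveness([0.8, 0.2], norm)  # a typical group-A face
score_b = attractiveness([0.2, 0.8], norm)  # a typical group-B face
# score_a exceeds score_b purely because of the 9:1 sampling ratio.
```

A surgeon relying on such a score would unknowingly apply one population's standard to every patient, which is precisely the threat to shared decision-making the letter describes.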
To read the full article: bit.ly/2T8UpJc
doi.org/10.1097/GOX.0000000000002162