
QCM2 Distinguishes Between Real and Fake Images

Generative Artificial Intelligence can produce fakes that are indistinguishable from the real thing. Of course, to make that claim we need the real thing as a point of reference, and we do not always have one. Suppose someone generates the image of a “new” painting by Leonardo da Vinci, one that has supposedly not yet been discovered. Clearly, we are speaking of an image of a painting, not an actual painting on canvas. In such a case it would be up to experts to say whether it could potentially be a real Leonardo. But we do not always have the luxury of expert opinion.

Let’s look at an easier case – that of two images of a painting, one known to be genuine, the other known to be a very high-quality copy, obtained from the original by microscopic modifications. Suppose it is the case shown below:

The QCM2 algorithm has been used to process both images. In addition, a new image pre-processing technique – the LPG technique – has been developed, which amplifies certain image features prior to QCM2 processing. The technique is proprietary. An example of a pre-processed image is illustrated below:

It is this pre-processed image that is fed into the QCM2 algorithm. The pre-processed images are mapped onto Complexity Maps. These are still very similar, but the differences between the two are now amplified.
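The LPG technique itself is proprietary and cannot be shown here, but the general idea of a pre-processing step that amplifies fine image features before comparison can be sketched with an ordinary high-pass boost. This is purely illustrative, not the actual LPG method; the gain value and the Laplacian kernel are arbitrary choices for the sketch.

```python
import numpy as np

def amplify_features(img: np.ndarray, gain: float = 4.0) -> np.ndarray:
    """Boost high-frequency detail with a Laplacian high-pass filter.

    NOT the proprietary LPG technique - only a generic illustration
    of how pre-processing can magnify microscopic differences.
    """
    # 3x3 Laplacian kernel responds to local intensity changes.
    kernel = np.array([[ 0, -1,  0],
                       [-1,  4, -1],
                       [ 0, -1,  0]], dtype=float)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    high_pass = np.zeros_like(img)
    for dy in range(3):          # direct 3x3 correlation
        for dx in range(3):
            high_pass += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return img + gain * high_pass

# Two nearly identical images: the "copy" carries one microscopic change.
rng = np.random.default_rng(0)
original = rng.random((64, 64))
copy = original.copy()
copy[32, 32] += 0.01  # microscopic modification

raw_diff = np.abs(original - copy).max()
amp_diff = np.abs(amplify_features(original) - amplify_features(copy)).max()
print(raw_diff, amp_diff)
```

After amplification the maximum pixel difference is many times larger than in the raw images, which is the point of such a pre-processing stage: small differences become easier for the downstream comparison to detect.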

The degree of similarity between the two maps, and hence between the images, is 99.78%. The separation of 0.22% is sufficient to say that the images are not identical.
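The actual metric QCM2 uses to compare Complexity Maps is not public. As a hedged sketch, if one assumes the maps are numeric matrices and similarity is one minus a normalized mean absolute difference, a percentage score of this kind could be computed as follows (the maps and the 0.002 noise level here are synthetic stand-ins, not QCM2 output):

```python
import numpy as np

def map_similarity(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Percentage similarity between two maps.

    Assumed metric: 100 * (1 - normalized mean absolute difference).
    The real QCM2 comparison may differ.
    """
    # Normalize by the largest magnitude so the score is scale-free.
    scale = max(np.abs(map_a).max(), np.abs(map_b).max(), 1e-12)
    separation = np.mean(np.abs(map_a - map_b)) / scale
    return 100.0 * (1.0 - separation)

rng = np.random.default_rng(1)
genuine = rng.random((32, 32))                              # stand-in map
copy_map = genuine + rng.normal(0, 0.002, genuine.shape)    # tiny perturbation

sim = map_similarity(genuine, copy_map)
print(f"similarity: {sim:.2f}%  separation: {100 - sim:.2f}%")
```

With a perturbation this small the score lands just below 100%, mirroring the situation described above: the maps are almost identical, yet the residual separation is enough to conclude the images are not the same.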

The above technique is currently being tested on medical images, such as MRI or CAT scans.



This post first appeared on Quantitative Complexity Management By Ontonix, please read the originial post: here
