
You it’s me, standby for deepfaked traffic, over.

Robert Dulin

The most cost-efficient threat posed to the Department of Defense (DOD), meaning one with a low barrier to entry, a reasonable probability of success, and a high potential reward, is the ability of Artificial Intelligence to get inside the OODA (decision) loop of field-grade officers and senior enlisted advisors.

Former Commandant of the Marine Corps General Al Gray was known for many “Grayisms,” but one that always comes to mind is “if you want a new idea, read an old book” (Hamid, 2015).  In many ways Artificial Intelligence’s (AI) “wisdom” falls in line with this thinking: using new technology in an old way.  For those who have watched the HBO series Band of Brothers, a humorous example may help explain.  During a training exercise in the English countryside, the commander of Easy Company, Captain Sobel, finds that he has led his paratroopers to the edge of a pasture with a barbed wire fence, and, more importantly, that they are in the wrong location.  While Captain Sobel attempts to correct the situation, one of Easy Company’s paratroopers, Sergeant Luz, decides to teach the lost captain a lesson.  From behind some bushes, Luz flawlessly impersonates the battalion executive officer, Major Horton, and convinces Captain Sobel to cut the barbed wire fence in order to get the company to the correct rally point.  Later, attempting to shift the blame for the cut fence, Captain Sobel claims he was ordered to do so by Major Horton, only to receive a “dressing down” when it emerges that the major was on leave.  Though the story brings a smile, it is an apt demonstration of how, in the right context, a military leader can be deceived into executing an action based on a convincing imitation (Robinson, 2001).  With the advent of AI and deepfakes, such disinformation operations can scale exponentially in both quantity and quality.

After the failures of the Vietnam War, Air Force fighter pilot John Boyd famously developed the OODA loop (Observe, Orient, Decide, Act), first appearing around 1980, to describe his decision-making process while dogfighting.  He explained how continuous repetition of an exercise would decrease the time spent in each phase of the OODA loop, shrinking the duration of the entire cycle (Phillips, 2021).  His decision-making model was subsequently adopted, unofficially, by the Marine Corps and the rest of the joint force to teach service members how to “think” during combat and in other situations where rapid, accurate decision making yields a competitive advantage.  We must continue to decrease our service members’ cycle time, and in the 21st century the entire United States Government (USG) joint and interagency system will require the aid of Artificial Intelligence, in the same way we needed logic chips to win what many consider Boyd’s magnum opus, the First Gulf War (John Boyd (Military Strategist), n.d.).

AI has the potential to be an unprecedented force multiplier for disinformation operations.  It can “get inside our OODA loop” and stay there, potentially forever, depending upon our ability to identify and counter it.  The emergence of deepfakes (Pasternack, 2023), ChatGPT, TikTok, and Twitter bots has enabled the instant dissemination of convincing, human-like disinformation, primarily to the public, but also to members of the USG.  What is perhaps most disturbing is the anticipated and unanticipated synergies these technologies present.  One such example exists between deepfakes and ChatGPT.  Normally, to create a convincing deepfake, a fraudster would have to write a convincing script with relevant context.  With ChatGPT, however, a fraudster can easily generate a convincing-enough script and then, using a Generative Adversarial Network (GAN), create a relevant, timely deepfake.  Scale up this type of operation by targeting field-grade officers and Staff Non-Commissioned Officers (SNCOs) in operational billets and flooding specific channels with disinformation, and the result is a broad campaign with sophisticated, time-sensitive content that dynamically adapts to, and dictates, a target’s actions inside the OODA loop.  Combined with well-defined targeting parameters, such a technique could also influence or disrupt key decision makers’ ability to execute at critical decision points.  This paper aims to discuss what these new mediums of Artificial Intelligence and Machine Learning are, elaborate on the potential types of Information Operations (IO) campaigns incorporating said methods, and provide some inoculation strategies and countermeasures to mitigate their effectiveness.

To oversimplify a complicated process, a Generative Adversarial Network, aka GAN (Gandhi, 2018), is a subset of Machine Learning and Artificial Intelligence in which two neural networks are pitted against each other.  “The two models are trained together in a zero-sum game, hence the term adversarial” (Brownlee, 2019).  The first network, known as the generator, tries to trick the second network, the discriminator, by creating (generating) fake data.  The fake data is then mixed with real data, and the discriminator is challenged to tell which is which.  This cat-and-mouse game continues, each model improving, until the generator produces data so similar to the source that the discriminator can barely discern the difference.  To use an analogy:

“We can think of the generator as being like a counterfeiter, trying to make fake money, and the discriminator as being like police, trying to allow legitimate money and catch counterfeit money. To succeed in this game, the counterfeiter must learn to make money that is indistinguishable from genuine money, and the generator network must learn to create samples that are drawn from the same distribution as the training data.” (Brownlee, 2019)
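
To make that loop concrete, here is a minimal, hypothetical sketch in PyTorch (my own toy illustration, not drawn from the cited sources): a generator learns to counterfeit samples from a simple Gaussian “real data” distribution while a discriminator plays the police.

```python
# Toy GAN: the generator ("counterfeiter") learns to mimic a 1-D Gaussian
# "real data" distribution; the discriminator ("police") learns to tell
# real samples from generated ones. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) + 4.0        # genuine data: Gaussian around 4.0
    fake = generator(torch.randn(64, 8))   # counterfeit data from random noise

    # Train the discriminator: label real as 1, generated as 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator output 1.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# The mean of generated samples should now sit near the real mean of 4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

As training converges, the discriminator’s accuracy collapses toward a coin flip, which is precisely the failure mode a human or automated defender faces against a mature deepfake.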

It would not be terribly difficult to imagine a use case where a near-peer adversary of the United States develops a strategy to leverage all existing US media channels (CNN, YouTube, Facebook, etc.) and selectively edits portions of broadcasts, stories, and posts with disinformation.  This “death by a thousand cuts” would gradually degrade confidence in US institutions.  Sound familiar?  Russia’s 2016 election interference used similar tactics, but this time the speed, scale, and number of distribution channels would be greatly accelerated.  Halting the spread of such an operation would be equally troublesome, as the campaign would by design attack the admittedly weakened trust in many US institutions.

As detrimental as a long-term degradation campaign against the United States would be, a more direct operation, one with a higher return on investment (ROI), well-defined measures of performance (MOPs), and cogent measures of effectiveness (MOEs), is to go after specific persons of interest.  Under this model of engagement, adversaries build targeting packages against USG personnel in operational command billets, such as a battalion commander or Navy captain.  Officers in this category would be ideal targets, as the level of cyber and IO protection afforded to them is significantly less than that given to more senior military personnel.  If successfully deepfaked, any communications originating from these impostor actors could significantly affect or reveal US troop movements, disposition, and ability to conduct operations.  For example, suppose a bad actor gains access to the personal device of an O-5 or O-6.  Any official comms would still go through DOD email or NIPR/SIPR.  However, using AI and a GAN (Hansen, 2022), the bad actor could harvest the commander’s text and speech patterns.  With that data in hand, the actor could use ChatGPT to mimic those patterns and craft communications over unsecured (non-NIPR/SIPR) channels, perfectly camouflaged so as not to arouse suspicion, yet capable over the long haul of inflicting friction or sowing confusion.

Furthermore, imagine a scenario where Chinese actors successfully gain access to the personal cell phone of an infantry sergeant, call him Sergeant Young, via TikTok.  Capability in hand, the Chinese could send text messages that perfectly match Sergeant Young’s speech patterns and syntax.  How could this affect US operations?  For one, they could text Sergeant Young’s platoon sergeant or platoon commander with clarifying questions about operational parameters, movements, logistics, or seemingly trivial pieces of information.  Upon receiving the requested information, the Chinese would then delete that portion of the conversation from Sergeant Young’s phone.  On an individual level, the informational value gained probably won’t move the needle, but performed against one-third of the Non-Commissioned Officers (NCOs) in a battalion, it could build a very detailed operational picture.  Perhaps the more disturbing angle in this example is just how low the barrier to entry is for adversarial success: anyone with TikTok on their phone is a potential primary threat vector for compromising their own device, as well as a secondary gateway to the devices of others.  The only limiting factor would be timing.  The real kicker for the US is that the attack vectors described all rely on personal devices, which are not subject to the same level of inspection or control as government-issued electronics.  It is, in short, a very low-risk, high-reward strategy to implement against the United States.

A similar yet distinct means of attack is to use a GAN to deliberately delay active military operations.  Picture this: an unknown actor gains access to the smartphone of a regimental executive officer (XO); using the built-in microphone, the actor records, analyzes, and synthesizes the XO’s voice, inflections, and speech patterns.  This data, coupled with compromised access to a unit’s communications frequencies, call signs, and crypto key, could allow the unknown entity to transmit contradictory information or slightly altered operational variables, e.g., rules of engagement (ROE) updates, over the radio to slow or potentially halt the OODA loop of a deployed unit.  Though it sounds a bit far-fetched, such tactics are already being used by scammers, who imitate the voices of college students on spring break and then use the cloned voice to demand a ransom payment (Karimi, 2023).

This context is especially concerning for the littoral operations of the Marine Corps.  If the enemy were able to access DOD email logon credentials, or by some other means gain access to the Simple Key Loader for a PRC-117 radio, they could use artificially generated speech to issue commands or request a Situation Report (SITREP) or Position Report (POSREP) in order to proactively gain intelligence.  In essence, an adversary could execute a man-in-the-middle attack and modify radio comms for an entire battalion, both internally and externally.  The adversary could sit on the network gathering comms data such as speech patterns, inflection points, call signs, and troop positions, then, at the opportune moment, run a script instructing ChatGPT to issue commands or launch a Distributed Denial of Service (DDoS) attack toward adjacent units or internally, all the while appearing as legitimate traffic.  Doing so would inherently slow or stall the OODA loop of the commander and subordinate leaders.  The same principle could also be used offensively in favor of the United States, especially in an environment such as the South China Sea, where such a capability could in theory be used to disrupt Chinese naval operations and gather intelligence.
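
One conceptual countermeasure to this kind of injected traffic, foreshadowing the mitigations below, is to authenticate every transmission with a keyed tag.  Here is a minimal sketch using Python’s standard hmac library; the key, call signs, and message format are invented for illustration, and the scheme only helps so long as the shared key itself has not been compromised.

```python
# Hypothetical sketch: an HMAC tag appended to each transmission lets a
# receiving station reject traffic injected or altered by a man in the
# middle who does not hold the pre-shared key.
import hmac
import hashlib

SHARED_KEY = b"pre-shared key loaded before deployment"  # e.g., via crypto fill

def tag(message: bytes) -> bytes:
    """Compute an authentication tag over the outgoing message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Constant-time check that the message came from a key holder."""
    return hmac.compare_digest(tag(message), received_tag)

sitrep = b"BLUE 6 THIS IS BLUE 5: POSREP GRID 123456"
t = tag(sitrep)

assert verify(sitrep, t)                         # legitimate traffic passes
assert not verify(b"BLUE 6 ... GRID 654321", t)  # injected or altered traffic fails
```

The design point is that synthesizing a convincing voice is no longer sufficient; the adversary must also forge a cryptographic tag, which raises the cost of the attack considerably.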

To go another layer deep, the same attack could be executed via a different attack vector, such as an iMessage zero-click exploit (Toulas, 2023).  In such a scenario, Sergeant Young’s infected personal iPhone would send an iMessage loaded with a zero-click exploit to a DOD iPhone number from his contact list.  Upon receipt, the recipient’s iPhone is instantly compromised, regardless of whether the recipient ever opens the message.  This exploit, combined with GAN-generated messages and a worm that spreads the attack to everyone in each recipient’s DOD iPhone contact list, could quickly become a massive operational security challenge.

The billion-dollar question: how can these risks be mitigated?  A simple strategy would be to require re-authentication of certain communications channels and platforms via a quantum RSA token (Quantum Computers, Spy Balloons, and China’s Endgame | Paul Dabbar, 2023).  This would validate the origin of the comms and prevent tampering.  Alternatively, go low tech: use old-school challenge and pass codes, published and disseminated on paper or by some other means prior to deployment, or routinely in garrison.  Perhaps we require all officers O-4 and above and Senior Enlisted Advisors (SEAs) E-8 and above to put protective apps or software on their personal devices to prevent the type of data collection and aggregation discussed.  Another interesting solution could be to implement zero-knowledge proof algorithms to authenticate end users; such algorithms can validate a user without transmitting sensitive information (Understanding Zero-Knowledge Proofs, 2022), as sketched below.  Finally, develop and utilize USG-trusted GAN and AI models to act as a friendly man in the middle for our comms traffic, validating where necessary.
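
As a sketch of the zero-knowledge idea, below is the classic Schnorr identification protocol in Python, with deliberately toy-sized textbook parameters; real systems use standardized groups and vetted libraries (see Barak, 2016, for the underlying theory).  The prover demonstrates knowledge of a secret x without ever transmitting it.

```python
# Schnorr identification: prove knowledge of secret x, where y = g^x mod p,
# without revealing x. Toy parameters only: g = 2 has order q = 11 mod p = 23.
import secrets

p, q, g = 23, 11, 2  # public: prime p, prime q dividing p-1, generator g of order q

def keygen():
    x = secrets.randbelow(q - 1) + 1       # secret key, never leaves the device
    return x, pow(g, x, p)                 # public key y = g^x mod p

def commit():
    r = secrets.randbelow(q - 1) + 1
    return r, pow(g, r, p)                 # prover -> verifier: t = g^r mod p

def respond(r, x, c):
    return (r + c * x) % q                 # prover -> verifier: s = r + c*x mod q

def verify(y, t, c, s):
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # check g^s == t * y^c mod p

x, y = keygen()
r, t = commit()
c = secrets.randbelow(q)                   # verifier -> prover: random challenge
s = respond(r, x, c)
print(verify(y, t, c, s))                  # True, yet x was never transmitted
```

Repeating the challenge over several rounds drives an impostor’s success probability toward zero, which is what makes this style of authentication attractive for re-validating a comms channel without exposing credentials a GAN could harvest.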

To conclude, the United States needs to anticipate these threats and begin to inoculate its infrastructure and personnel against them.  Not only will deepfake tech make compromises of DOD networks harder to detect, but once a network’s perimeter is breached, deepfake AI will enable exponentially explosive exploitation of the breach.  The most effective solution is likely some combination of the above recommendations, along with a zero-trust mindset.  Time is our greatest factor; if we do not start now, the costs may be severe.  The last thing we want is to end up like Captain Sobel: lost in a field, confused, and easily convinced to pursue a course of action that is not in our best interest or the interests of our people.

References

Barak, B. (2016). Zero Knowledge Proofs. https://www.boazbarak.org/cs127spring16/chap14_zero_knowledge.pdf

Brownlee, J. (2019, June 17). A gentle introduction to Generative Adversarial Networks (GANs). Machine Learning Mastery. Retrieved July 21, 2023, from https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/

Gandhi, R. (2018, May 10). Generative adversarial networks — explained. Towards Data Science. Retrieved July 21, 2023, from https://towardsdatascience.com/generative-adversarial-networks-explained-34472718707a

Gourley, B. (2022, February 26). John Boyd on Patterns of Conflict and the OODA Loop. OODA Loop. Retrieved July 25, 2023, from https://www.oodaloop.com/archive/2022/02/26/john-boyd-on-patterns-of-conflict-and-the-ooda-loop/

Hamid, T. (2015). Grayisms: Implications for Future Policy and Conduct (Y. Alexander, E. H. Brenner, & D. Wallace, Eds.). Potomac Institute for Policy Studies.

Hansen, C. (2022, July 21). Generative adversarial networks explained. IBM Developer. Retrieved July 21, 2023, from https://developer.ibm.com/articles/generative-adversarial-networks-explained/

John Boyd (military strategist). (n.d.). Wikipedia. Retrieved July 20, 2023, from https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)#cite_ref-17

Karimi, F. (2023, April 29). AI scam calls: This mom believes fake kidnappers cloned her daughter’s voice. CNN. Retrieved July 20, 2023, from https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html

Marcus, G. (2023, May 12). The urgent risks of runaway AI — and what to do about them [Video]. YouTube. https://www.youtube.com/watch?v=JL5OFXeXenA

Pasternack, A. (2023, February 22). Deepfakes are getting smarter with ChatGPT, sparking funding and fears. Fast Company. Retrieved July 21, 2023, from https://www.fastcompany.com/90853542/deepfakes-getting-smarter-thanks-to-gpt

Phillips, M. S. (2021, October 4). Revisiting John Boyd and the OODA loop in our time of transformation. DAU. Retrieved July 21, 2023, from https://www.dau.edu/library/defense-atl/blog/revisiting-john-boyd

Quantum Computers, Spy Balloons, and China’s Endgame | Paul Dabbar. (2023, February 16). Hold These Truths with Dan Crenshaw. Retrieved July 21, 2023, from https://holdthesetruthswithdancrenshaw.libsyn.com/quantum-computers-spy-balloons-and-chinas-endgame-paul-dabbar

Robinson, P. (Director). (2001). Currahee (Season 1, Episode 1) [TV series episode]. In Band of Brothers. MAX.

Toulas, B. (2023, June 1). Russia says US hacked thousands of iPhones in iOS zero-click attacks. Bleeping Computer. Retrieved July 31, 2023, from https://www.bleepingcomputer.com/news/security/russia-says-us-hacked-thousands-of-iphones-in-ios-zero-click-attacks/

Understanding zero-knowledge proofs. (2022, July 23). Avestura. Retrieved July 21, 2023, from https://avestura.dev/blog/zero-knowledge-proofs

Yasar, K. (n.d.). What is a generative adversarial network (GAN)? TechTarget. Retrieved July 21, 2023, from https://www.techtarget.com/searchenterpriseai/definition/generative-adversarial-network-GAN
