
Don’t Believe Everything You See/Hear



By: Marion Desplenter



“There are two ways to be fooled. One is to believe what isn’t true; the other is to refuse to believe what is true.” Søren Kierkegaard, Danish philosopher (1813-1855)


After hearing numerous news stories about AI and deepfakes, I wanted to look into the topic further. I know that artificial intelligence is affecting our lives more and more. AI is currently used in facial recognition, chatbots, entertainment, social media, transportation, navigation, gaming, education, medicine, smart home devices ... the list goes on and on. There are plenty of benefits to this advanced technology, but how do we guard against being duped by malicious intent?


What are deepfakes? The term was coined in 2017 and is a mashup of “deep learning” and “fake”. Deep learning is a method used to teach computers to process data through artificial neural networks, mimicking the way the brain learns. Current AI can modify any medium (image, video, or audio) so that the result closely resembles the original. With today’s AI advancements, neither humans nor AI tools can reliably tell an original from a deepfake version.


Common examples of deepfakes include:

  • Face swapping - replacing one face with another
  • Lip-syncing - replacing the audio in a video with different audio and making the speaker’s lips sync with the new audio
  • Full-body animation - replacing the body of one person (person A) in a video with another person’s body (person B) to make it appear that person A is behaving exactly as person B did.

Exciting possibilities can become a reality when deepfakes are used for creative expression or entertainment. Artists and creators can generate realistic videos by merging different faces or voices. Actors can play younger or older versions of themselves. Deepfakes can be used to enhance special effects in movies, to resurrect deceased actors or to create new characters using existing footage.


In the area of education, realistic training scenarios could be created for medical students practicing surgeries or for pilots simulating emergencies. Educational videos could be used to learn about historical figures. Imagine sitting down to watch a historical figure, like Albert Einstein or Jacques Cousteau, “speak” about their experiences.


Of course, we have to be vigilant about the possible misuse of deepfakes. They could be used to create very convincing fake news videos and to spread false information, posing a significant threat to public trust in news media, evidence and legal proceedings. It’s not a stretch to imagine a deepfake video of a political leader making inflammatory statements about another country or person to sway opinions or, at worst, incite violence. National security could also be harmed if deepfakes were used to sway public opinion during an election campaign.


Privacy and blackmail are also a huge concern since anyone’s face can be superimposed onto explicit or compromising content. In addition, modified videos could be used to extort money or seriously damage someone’s reputation.


Many celebrities have been targeted with deepfakes. For example, Canadian entertainers Michael Bublé and Rick Mercer, TV chef Mary Berg, and hockey star Sidney Crosby have had their images used to entice people to invest money in bogus scams or cryptocurrency schemes.


What could you or I do, if someone took an image of us from social media and created a deepfake video, then shared it to generate a false representation of us? Pablo Tseng, a lawyer in Vancouver who specializes in intellectual property at McMillan LLP, says "You are in control of how you should be presented. The law will still see this as a wrong that's been committed against you. Of course, the question is: Do you think it's worth your while to pursue this in court?"


The U.S. has proposed legislation regarding deepfakes (the DEEP FAKES Accountability Act), while Canada currently has no laws that specifically address them. However, according to Pablo Tseng, existing torts (judge-made civil law that provides compensation for harm caused by wrongdoing) could be used in a civil lawsuit that involves AI-generated deepfakes.


The Canadian federal government is developing legislation around the use and regulation of AI in Canada through Bill C-27, the Digital Charter Implementation Act. Yoshua Bengio, a Canadian computer scientist known for his work on artificial neural networks and deep learning, attended a recent committee meeting regarding Bill C-27 and encouraged members of Parliament to pass the bill immediately, even if it is not perfect.


“There’s real concern about the use of AI in politically oriented ways that go against the principles of our democracy,” he told attendees of the meeting. Bengio claimed that it’s “very important to cover general-purpose AI systems because they’re also the ones that could be the most dangerous if misused.”


Catherine Régis, a professor at the Université de Montréal, also attended the committee meeting and wants the government to act urgently. She stated that “AI regulation is a global effort, and Canada must figure out what to do at the national level if it wants to have a voice” at the global level.


Educating yourself and others about the potential risks of AI-generated deepfakes is crucial to navigating this quickly evolving technology. Protect yourself online. While not foolproof, taking a cautious approach to the images and videos shared online is a simple way to avoid being duped. There are tips available online about protecting your photos from being stolen and fact-checking what you see (see the helpful links below).


Most importantly, think twice before sharing images or videos, especially if they might damage a person’s reputation or cause them harm.


Helpful links:

https://mediasmarts.ca/break-fake

https://www.copyrighted.com/blog/image-protection-online

https://mediasmarts.ca/teacher-resources/break-fake-how-tell-whats-true-online

https://www.canada.ca/en/campaign/online-disinformation.html


Resources:

https://www.cbc.ca/news/canada/deepfake-ai-scam-ads-1.7104225

https://www.cbc.ca/news/politics/ai-pioneer-canada-needs-law-to-regulate-ai-now-1.7105463

https://www.unit21.ai/fraud-aml-dictionary/deepfake

https://interestingengineering.com/innovation/deepfakes-are-bad-but-what-are-some-of-the-possible-advantages



YME Volunteer 2024



"Hands down, one of the best educational non-profits that impact underprivileged youth, in a manner very few other organizations are able to do so"