Yes, there is certainly a lot of fear surrounding the issue of deepfake videos and how they can have catastrophic consequences. But whether you wish to get on board the acceptance train or decide to stay back on the platform in apprehension, gaining a little more knowledge on the subject will cause no harm.

Deepfake technology is not inherently evil. In fact, it is one of the most brilliant developments in the tech world, and one that is causing serious disruption. Once you see the potential of GAN (generative adversarial network) technology and realize that it can be used productively for the good of mankind, deepfakes will not seem as threatening as they do right now.

As Yoda says, “Fear is the path to the Dark Side. Fear leads to anger, anger leads to hate, hate leads to suffering.”

Deepfakes have a flip side, and a rather dark one at that. But the more you familiarize yourself with the concept and the technology, the less control it will have over you. In fact, you can even explore the positive potential of deepfakes and turn them into an asset instead of a liability.

Right now, political institutions are (justifiably) taking strong action to remove access to deepfake software available online. While this makes such tools harder to find, we did track down one program that is still available (but for how much longer?).

Things to know before making a deepfake video

If you have an intermediate knowledge of computers and any experience with software at all, making a deepfake video won’t be that difficult. The whole premise is that the software does the grunt work; all you have to do is feed in the data and run it. There is audio deepfake software as well, which requires the desired output to be written out, but for this instructional we will focus on face morphing.

Before we proceed, keep in mind that you cannot use anyone’s picture without their permission, and you most definitely cannot publish it in the public domain. This applies to any kind of video, photo, or even audio that could violate a person’s basic rights or cause them harm. Celebrities and famous personalities included.

Another important thing to keep in mind is that deepfaking is an extremely popular method for spreading political propaganda and sowing chaos. We recommend steering clear of such content to avoid any trouble.

The reason we are being so forceful about what is acceptable isn’t just that the misuse of deepfake software has become an international crisis; it’s also that fear of this software and its potential is already being stoked in people.

But it’s like this: those who mean harm and intend to cause chaos will find another avenue if not this one. And the more we are aware of how deepfakes work, the less power they will have over us.

Requirements

Creating a deepfake video is not possible without having the right hardware support. Make sure you are equipped with the following before moving ahead.

Hardware

  • A modern CPU with 8 GB of RAM should work; more RAM is preferred for faster processing
  • An Nvidia or AMD graphics card with 2 GB of VRAM or more
  • Windows 7 or later
  • Intel Core i5 or better

Software

  • For this instructional, we recommend getting the DeepFaceLab software from GitHub.
  • Some NVIDIA graphics cards require the CUDA Toolkit to be installed. You can download it for your device from here. (If you want to confirm the toolkit is visible on your system, see the check script sketched below.)
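
Before going any further, it can save some head-scratching to confirm that the NVIDIA driver and the CUDA Toolkit are actually visible. The sketch below is only a convenience script in Python (any recent Python 3 will do), not part of DeepFaceLab itself; nvidia-smi ships with the NVIDIA driver and nvcc with the CUDA Toolkit.

    # Quick sanity check: are the NVIDIA driver and CUDA toolkit visible?
    # This is an optional convenience script, not part of DeepFaceLab itself.
    import shutil
    import subprocess

    def check_tool(name, args):
        """Run a diagnostic command and report whether it succeeded."""
        if shutil.which(name) is None:
            print(f"{name}: not found on PATH")
            return
        result = subprocess.run([name, *args], capture_output=True, text=True)
        status = "OK" if result.returncode == 0 else "returned an error"
        print(f"{name}: {status}")
        first_line = (result.stdout or result.stderr).splitlines()
        if first_line:
            print(f"  {first_line[0]}")  # driver / toolkit version summary

    check_tool("nvidia-smi", [])       # driver and GPU visibility
    check_tool("nvcc", ["--version"])  # CUDA toolkit compiler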

Source and destination video requirements

  • Both the source and destination videos should be high resolution; 4K is preferable, but 1080p will do.
  • The videos should be bright enough for the individual faces to be clearly visible.
  • The faces you’re about to swap should share some traits – skin color, hair, beard, and accessories like glasses and hats.
  • The face that needs to be extracted and replaced shouldn’t be too far from the camera, but it shouldn’t be in a close-up shot either.
  • The video should feature the two faces across various angles and expressions.
  • The video should be long enough (around 2-3 minutes) for the software to map out and fully scan the desired face.
  • Videos where the subject isn’t moving around much are easier to modify.

To get the best results, we recommend using interview footage, as it gives you a fairly close shot of a subject who doesn’t move around the frame much. The software will find it easier to scan such videos and extract facesets from them without compromising much on quality. If you want to sanity-check a clip against these requirements before committing to it, a rough script like the one sketched below can help.
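
Here is the kind of quick pre-flight check we had in mind. It assumes you have Python and OpenCV installed (pip install opencv-python); the thresholds it warns about simply mirror the guidelines above rather than anything DeepFaceLab enforces, and the file paths are the default workspace names, so swap in your own clips.

    # Rough pre-flight check for a candidate clip: resolution, frame rate,
    # duration, and a crude brightness estimate. Thresholds mirror the
    # guidelines above and are only indicative.
    import cv2

    def inspect_clip(path, sample_every=30):
        cap = cv2.VideoCapture(path)
        if not cap.isOpened():
            raise IOError(f"Could not open {path}")

        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        duration = frames / fps

        # Sample every Nth frame to estimate average brightness (0-255).
        brightness, sampled = 0.0, 0
        for i in range(0, frames, sample_every):
            cap.set(cv2.CAP_PROP_POS_FRAMES, i)
            ok, frame = cap.read()
            if not ok:
                break
            brightness += cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
            sampled += 1
        cap.release()

        avg = brightness / sampled if sampled else 0
        print(f"{path}: {width}x{height} @ {fps:.1f} fps, "
              f"{duration:.1f}s, avg brightness {avg:.0f}/255")
        if height < 1080:
            print("  warning: below 1080p")
        if duration < 120:
            print("  warning: shorter than ~2 minutes")
        if avg < 60:
            print("  warning: clip may be too dark for face detection")

    inspect_clip("workspace/data_src.mp4")
    inspect_clip("workspace/data_dst.mp4")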

How to make a deepfake video

The software comes in at three major stages: extraction of the data, training of the neural network, and merging of the source and destination files. The result, in turn, depends on three interconnected factors: the quality of the input videos, the amount of time you decide to invest in training, and the duration of the video itself. If you want a good-quality result, you will have to invest the time required for the neural network to train on the footage. Likewise, if the video you wish to create is long, the quality will be compromised in some way or another.

Keep in mind that you will probably have to give something up on the first try, and it will mostly be quality. The more you work with the software, the easier it will be to achieve the results you desire. The four parts below walk through the batch files one at a time; a rough overview of the whole sequence is sketched below, before Part 1.
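
To keep the big picture in view, here is the batch-file sequence at a glance, driven from Python purely for illustration. The exact .bat filenames (and their numeric prefixes) vary between DeepFaceLab releases, and the install path shown is an assumption, so match both to your own download; in practice you can simply double-click each batch file in this order.

    # Illustrative overview of the DeepFaceLab batch-file order.
    # Filenames and the install path are placeholders; check your release.
    import subprocess
    from pathlib import Path

    DFL_DIR = Path(r"C:\DeepFaceLab")  # assumed extraction location

    PIPELINE = [
        "extract images from video data_src.bat",
        "extract images from video data_dst FULL FPS.bat",
        "data_src faceset extract.bat",
        "data_dst faceset extract.bat",
        "train H64.bat",
        "convert H64.bat",
        "converted to mp4.bat",
    ]

    for step in PIPELINE:
        bat = DFL_DIR / step
        print(f"--- running {bat.name} ---")
        # Each batch file opens its own console and may prompt for input,
        # so run them one at a time and wait for each to finish.
        subprocess.run(["cmd", "/c", str(bat)], check=True)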

Part 1: Extraction

  • Download the DeepFaceLab software torrent by looking for your platform under “Releases” and then selecting a suitable build that goes with your graphics card. 
  • After you’ve downloaded the desired .EXE file, double-click it to extract the files to your device.
  • Once extracted, you will see two folders, ‘internal’ and ‘workspace’, along with a number of batch files. The ‘workspace’ folder is the one you will need to access throughout the conversion process. Inside it you will see two .MP4 video files – “data_dst” and “data_src”. The first is the destination video that the replaced face will be copied onto, and the second is the source video from which the face to be pasted onto the destination will be extracted.
  • For now, go back to the folder where you extracted the original .EXE file and run the ‘clear workspace’ batch file to reset the ‘workspace’ folder; this is where you will be saving your files.
  • After you’ve cleared the workspace, place your source and destination videos inside the ‘workspace’ folder. Rename the original video that will become the final product to data_dst and the video whose face will be used as the replacement to data_src.
  • Now, separate the frames of both videos into their own folders by following the next steps (a conceptual sketch of what this extraction does appears after the list).
  • Run extract images from video data_src; the console will open and start the extraction process for the source file.
  • Run extract images from video data_dst FULL FPS next to repeat the same process for the destination file.
  • The console will close automatically once the frames have been extracted.
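
For the curious, this extraction step boils down to dumping every frame of the video as an image. The sketch below shows the idea with OpenCV; it is not what DeepFaceLab runs internally (the batch files wrap their own tooling), just a way to picture what ends up in the data_src and data_dst folders.

    # Conceptual sketch of frame extraction: write each frame of a video
    # out as a numbered image, much like the files that appear in
    # workspace/data_src and workspace/data_dst after the batch files run.
    import cv2
    from pathlib import Path

    def extract_frames(video_path, out_dir):
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(str(out / f"{index:05d}.png"), frame)
            index += 1
        cap.release()
        print(f"{video_path}: wrote {index} frames to {out}")

    extract_frames("workspace/data_src.mp4", "workspace/data_src")
    extract_frames("workspace/data_dst.mp4", "workspace/data_dst")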

Part 2: Face extraction

  • Run data_src faceset extract.
  • When the console asks for your face type, enter “wf”.
  • Run data_src view aligned result to check the quality of the extracted frames. Inside the data_src folder you will now see an ‘aligned’ folder containing the facesets extracted from the source video. If any frames look disoriented or unnecessary for your purposes, delete them from this folder. (A rough illustration of what this extraction step does is sketched after these steps.)
  • Repeat this process for the data_dst video by running data_dst faceset extract.
  • When the console asks for your face type, enter “wf”.
  • Run data_dst view aligned results to view the frames that have been extracted from the destination video. Inside the data_dst folder you will now see an ‘aligned’ folder containing the facesets extracted from the destination video. If any frames look disoriented or unnecessary for your purposes, delete them from this folder.
  • You can also manually mask faces from your source and destination videos for better results in the end. 
  • To mask the facesets from the destination video, run data_dst mask - edit. The console that loads will let you set boundary points to map the facesets individually. When masking, make sure you only map the interior of the face that needs to be masked and avoid getting anywhere close to the person’s hair.
  • Similarly, to mask the facesets from the source video, run data_src mask - edit.
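
To picture what the faceset extraction is doing, here is a bare-bones version of the idea using OpenCV’s bundled Haar cascade: find a face in each frame and save a resized crop. DeepFaceLab’s own detector and alignment are far more sophisticated, and the folder names below are assumptions, so treat this strictly as an illustration.

    # Bare-bones illustration of face extraction: detect a face per frame
    # and save a resized crop. DeepFaceLab's real extractor does true
    # landmark-based alignment; this is only a conceptual stand-in.
    import cv2
    from pathlib import Path

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def extract_faces(frames_dir, aligned_dir, size=256):
        out = Path(aligned_dir)
        out.mkdir(parents=True, exist_ok=True)
        for frame_path in sorted(Path(frames_dir).glob("*.png")):
            image = cv2.imread(str(frame_path))
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
            for i, (x, y, w, h) in enumerate(faces):
                crop = cv2.resize(image[y:y + h, x:x + w], (size, size))
                cv2.imwrite(str(out / f"{frame_path.stem}_{i}.jpg"), crop)

    extract_faces("workspace/data_src", "workspace/data_src/aligned")
    extract_faces("workspace/data_dst", "workspace/data_dst/aligned")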

Part 3: Training

  • Run the train H64 batch file if you are a beginner (expect some compromise on quality).
  • Once the console opens, press Enter/Return to select all the default options.
  • The model will start loading and display information like session options and the size of the data set.
  • The preview window will open once the model has finished loading. Observe the frame-by-frame training process.
    • The best way to gauge the progress of the training is the loss value shown for the file; anything under 0.2 should do. (The sketch after these steps shows the idea behind what is being trained.)
  • Press Enter to end the training once you are satisfied with the results.
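
If you are wondering what is actually being trained: the core idea behind face-swap models is a shared encoder with two decoders. The encoder learns a common representation of both faces, each decoder learns to rebuild one of them, and swapping means encoding one face and decoding it with the other decoder. The sketch below expresses that idea in PyTorch purely for illustration; DeepFaceLab’s H64 and later models are built on their own framework and are considerably more elaborate, and the loss it reports is not identical to the simple pixel loss used here.

    # Conceptual sketch of deepfake training: one shared encoder, two decoders.
    # Not DeepFaceLab's actual code; a minimal PyTorch illustration only.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
                nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, z):
            return self.net(z)

    encoder = Encoder()
    decoder_src, decoder_dst = Decoder(), Decoder()
    optimizer = torch.optim.Adam(
        list(encoder.parameters())
        + list(decoder_src.parameters())
        + list(decoder_dst.parameters()),
        lr=5e-5,
    )
    loss_fn = nn.L1Loss()  # simple pixel reconstruction loss

    # One training step on a (hypothetical) batch of 64x64 source and
    # destination faces; real training loops over the extracted facesets.
    faces_src = torch.rand(8, 3, 64, 64)
    faces_dst = torch.rand(8, 3, 64, 64)
    loss = loss_fn(decoder_src(encoder(faces_src)), faces_src) \
         + loss_fn(decoder_dst(encoder(faces_dst)), faces_dst)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"reconstruction loss: {loss.item():.3f}")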

Part 4: Merge and Convert

  • Run convert H64 for converting the H64 model from the batch files.
  • Select all the default options using Enter/Return.
  • A new folder called merge will be created in the data_dst folder.
  • Now run the converted to mp4 batch file.
  • Look for result.mp4 inside the ‘workspace’ folder. That’s the final deepfake video you set out to create. (The sketch below shows what this final conversion boils down to.)
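
For reference, the final conversion step essentially stitches the merged frames back into a video and re-attaches the audio from data_dst. A single ffmpeg call like the one below achieves much the same thing; the frame rate, folder name (‘merge’ or ‘merged’ depending on the release), and file pattern are assumptions to adjust for your workspace.

    # Stitch merged frames back into an .mp4 and copy the audio track from
    # the destination video. Paths, pattern, and frame rate are assumptions.
    import subprocess

    subprocess.run(
        [
            "ffmpeg",
            "-framerate", "30",                          # match data_dst's fps
            "-i", "workspace/data_dst/merged/%05d.png",  # merged frames
            "-i", "workspace/data_dst.mp4",              # audio source
            "-map", "0:v", "-map", "1:a?",               # video from frames, audio if present
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            "workspace/result.mp4",
        ],
        check=True,
    )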

That’s all you need to know to create a deepfake video yourself.