\begin{description}
 \item[Set up Cinelerra] A Cinelerra render farm is organized into a master node and any number of client nodes. The master node is the computer running the GUI. The client nodes are any other computers on the network with Cinelerra installed; they are run from the command line. Before you start the master node, set up a shared filesystem on the disk storage node, as this is the node that will hold the common volume where all the data is stored.
 The location of the project and its files should be the same on the client computers as on the master computer, and to avoid permission problems it is best to use the same user on the master and the clients.
 For example, if the project is in \texttt{/home/<user>/project-video}, you must create the same directory path on the clients, but empty. Sharing the project directory on the master computer can be done with NFS as described next. Alternatively, you can look up how to share a directory with Samba.
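 As a rough sketch only, the matching user and empty project path could be created on each client like this (the user name and uid are placeholders; match whatever is actually used on the master):
 \begin{lstlisting}[language=bash,numbers=none]
# as root on each client; <user> and uid 1000 are placeholders --
# use the same user name and uid/gid as on the master
$ useradd -u 1000 <user>
$ mkdir -p /home/<user>/project-video
$ chown <user>:<user> /home/<user>/project-video
 \end{lstlisting}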
 \item[Create a shared filesystem and mount using NFS] All nodes in the render farm should use the same filesystem, with the same paths to the project files on the master and all client nodes. The easiest way to achieve this is to set up an NFS shared disk system.
 \begin{enumerate}
 \item On each of the computers, install the NFS software if it is not already installed. For example, on Debian 9
 you will need to run (verify the exact package name for your distribution before running any command line):
 \begin{lstlisting}[language=bash,numbers=none]
$ apt-get install nfs-kernel-server
 \end{lstlisting}
 \item On the computer that contains the disk storage to be shared, define the network filesystem. For
 example, to export \texttt{/tmp}, edit the \texttt{/etc/exports} file and add the following line (the exported path comes first, followed by the allowed network and options):
 \begin{lstlisting}[language=bash,numbers=none]
/tmp 192.168.1.0/24(rw,fsid=1,no_root_squash,sync,no_subtree_check)
 \end{lstlisting}
 \item Next, re-export the NFS directories using:
 \begin{lstlisting}[language=bash,numbers=none]
$ exportfs -ra
 \end{lstlisting}
 and you may have to start or restart the NFS service (the unit name varies by distribution; it is typically \texttt{nfs-server} on Fedora and \texttt{nfs-kernel-server} on Debian):
 \begin{lstlisting}[language=bash,numbers=none]
$ systemctl restart nfs
 \end{lstlisting}
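 To confirm what is currently being exported, you can list the active exports on the storage host:
 \begin{lstlisting}[language=bash,numbers=none]
$ exportfs -v   # list currently exported directories with their options
 \end{lstlisting}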
 \item Each of the render farm computers must mount the exported NFS target path. To see the exports
 which are visible from a client, log in as root on the client machine and key in:
 \begin{lstlisting}[language=bash,numbers=none]
$ showmount -e <ip-addr> # using the ip address of the storage host
 \end{lstlisting}
 \item To access the host disk storage from the other computers in the render farm, mount the NFS export on
 the corresponding target path (verify the command for your system before running it):
 \begin{lstlisting}[language=bash,numbers=none]
$ mount -t nfs <ip-addr>:/<path> <path>
 \end{lstlisting}
 where \texttt{<path>} is the storage host directory, and \texttt{<ip-addr>} is the network address of the storage host.
 Because all of the computers must have the same directory path, create that same directory path, with the same uid/gid/permissions, on each storage client computer ahead of time.
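 A minimal sketch of preparing and mounting such a path on a client (the path, address, and 1000:1000 owner are placeholders; use the same values as on the storage host):
 \begin{lstlisting}[language=bash,numbers=none]
# as root on the client; match the path and uid/gid of the storage host
$ mkdir -p /mnt/project
$ chown 1000:1000 /mnt/project
$ mount -t nfs 192.168.1.12:/mnt/project /mnt/project
$ df -h /mnt/project   # confirm the NFS mount is active
 \end{lstlisting}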
 \item To make this permanent across reboots on the client nodes, add the following line to \texttt{/etc/fstab}:
 \begin{lstlisting}[language=bash,numbers=none]
{masternode}:/nfsshare /mnt nfs defaults 0 0
 \end{lstlisting}
 You can also make this permanent on the disk storage host, BUT be careful: the command lines shown, which were
 correct in January 2018 on Fedora, may be different for your operating system or in the future. In
 addition, if your network is not up, there may be numerous problems, and if you make a mistake your
 system may not boot. To make it permanent, add the following line to \texttt{/etc/fstab}:
 \begin{lstlisting}[language=bash,numbers=none]
192.168.1.12:/tmp /tmp nfs rw,async,hard,intr,noexec,noauto 0 0
 \end{lstlisting}
 You will still have to mount the above manually because of the \textit{noauto} parameter, but you won't
 have to remember all of the other necessary parameters. Depending on your expertise level, you can
 change that.

 Later, to remove access to the storage host filesystem:
 \begin{lstlisting}[language=bash,numbers=none]
$ umount <path>
 \end{lstlisting}

 Be aware that you may have to adjust any security or firewalls you have in place. \textit{Most firewalls will require extra rules to allow NFS access}. Many have built-in configurations for this.
 \end{enumerate}
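 As an illustration of such a built-in configuration, on systems using firewalld the predefined NFS service definitions can be enabled like this (assuming firewalld; other firewalls differ):
 \begin{lstlisting}[language=bash,numbers=none]
# the nfs service covers NFSv4; rpc-bind and mountd
# are additionally needed for NFSv3 clients
$ firewall-cmd --permanent --add-service=nfs
$ firewall-cmd --permanent --add-service=rpc-bind
$ firewall-cmd --permanent --add-service=mountd
$ firewall-cmd --reload
 \end{lstlisting}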
 \item[Configure Rendering on Master Node] There is one master node, which runs the Cinelerra GUI; this is where the video is edited and the command is given to start rendering. Any number of client computers can be run from the command line only, so they can be headless, since no X or graphical libraries are needed. Of course, the Cinelerra software must be installed on each of the client computers.
 \begin{enumerate}
 \item Assuming you already have Cinelerra installed on the master node, start Cinelerra by clicking on the
 icon or by typing the following command in a terminal: \texttt{/{cinelerra\_path}/cin}.
 \item Use the pulldown \texttt{Settings $\rightarrow$ Preferences}, Performance tab, to set up your render farm
 options in the Render Farm pane.
 \item Check the \textit{Use render farm} option. Once this option is enabled, rendering is usually done via the render farm by default. Batch rendering can be done locally, or farmed.
 \item Add the hostname or the IP address of each client node in the Hostname textbox and the port
 number that you want to use in the Port textbox. You can make sure a port number is not already in
 use by keying in on the command line:
 \begin{lstlisting}[language=bash,numbers=none]
$ netstat -n -l -4 --protocol inet
 \end{lstlisting}
 Next, click on the \textit{Add Nodes}
 button and you will see that host appear in the Nodes list box to the right. The \texttt{X} in the first
 column of the nodes box denotes that the node is active. To review the \textit{standard} port allocations,
 check the \texttt{/etc/services} file.
 \item Enter the total number of jobs that you would like to be used in the \textit{Total jobs} textbox.
 \item The default watchdog timer initial state is usually just fine but can be adjusted later if needed.
 \item Click OK on the Preferences window when done.
 \end{enumerate}
 \item[Create Workflow] While working on the master computer, it is recommended that you keep all the resources being used on the same shared disk. Load your video/audio piece and do your editing and preparation. Add any desired plugins, such as a Title, to fine-tune your work. You want to make sure your video is ready to be rendered into the final product.
 \item[Start the Client Nodes] To start up the client nodes, run Cinelerra from the command line on each of the client computers using the following command:
 \begin{lstlisting}[language=bash,numbers=none]
/{cinelerra_pathname}/cin -d [port #]  # for example: /mnt1/bin/cinelerra -d 401
 \end{lstlisting}
 This starts Cinelerra in command prompt mode so that it listens on the specified port number for commands from the master node for rendering. When you start each of the clients up, you will see some messages scroll by as each client is created on that computer, such as:
 \begin{lstlisting}[language=bash,numbers=none]
RenderFarmClient::main_loop: client started
RenderFarmClient::main_loop: Session started from 127.0.0.1
 \end{lstlisting}
 As it completes its jobs, you should see:
 \begin{lstlisting}[language=bash,numbers=none]
RenderFarmClientThread::run: Session finished
 \end{lstlisting}
 A quick way to start a sequence of clients is to use:
 \begin{lstlisting}[language=bash,numbers=none]
for n in `seq 1501 1505`; do cin -d $n; done
 \end{lstlisting}
 \item[Render Using Render Farm] After you have followed the preceding steps, you are ready to use the render farm. Click on \texttt{File $\rightarrow$ Render}\dots which opens the render dialog. The most important point here is to use, for \textit{the Output path / Select a file to render to}, a path/file name that is on the shared volume mounted on the clients. Click on OK to render. The Cinelerra program divides the timeline into the number of jobs specified by the user. These jobs are then dispatched to the various nodes depending upon the load balance. The first segment will always render on the master node and the other segments will be farmed out to the render nodes. Batch rendering, as well as BD/DVD rendering, may use the render farm. Each line in the batchbay can enable or disable the render farm. Typically, video can be rendered into many file segments and concatenated, but normally audio is rendered as one monolithic file (not farmed).

 Another performance feature which can use the render farm is \textit{Background Rendering}. This is also enabled on the \texttt{Settings $\rightarrow$ Preferences}, Performance tab. The background render function generates a set of image files by pre-rendering the timeline data on the fly. As the timeline is updated by editing, the image data is re-rendered to a \textit{background render} storage path. The render farm will be used for this operation if it is enabled at the same time as the \textit{background render} feature.
 \item[Assemble the Output Files] Once all of the computer jobs are complete, you can put the output files together by using the shell script \textit{RenderMux} (from the menubar \textit{scripts} button just above FF) if the files were rendered using ffmpeg, or you can load these by creating a new track and specifying concatenate to existing tracks in the load dialog, in the correct numerical order. File types which support direct copy can be concatenated into a single file by rendering to the same file format with render farm disabled, as long as the track dimensions, output dimensions, and asset dimensions are equal.
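 For background, this kind of lossless concatenation of ffmpeg-rendered segments can be sketched with ffmpeg's concat demuxer (illustrative only; the segment and output names are placeholders):
 \begin{lstlisting}[language=bash,numbers=none]
# list the rendered segments in numerical order
$ printf "file '%s'\n" /mnt/project/out001.mp4 /mnt/project/out002.mp4 > list.txt
# stream-copy the segments into one file without re-encoding
$ ffmpeg -f concat -safe 0 -i list.txt -c copy /mnt/project/final.mp4
 \end{lstlisting}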
\end{description}