\begin{enumerate}[nosep]
\item unpack/patch source code: \\
\texttt{git clone -{}-depth 1 ``git:/{\dots}/target'' cinelerra5} \\
\texttt{./autogen.sh}
\item configure build:\\
\texttt{./configure -{}-with-single-user}
\item \texttt{./configure} {\dots}
\end{enumerate}
\section{Experimental Builds}
\label{sec:experimental_builds}
\index{build!experimental}

The main compilation we have seen leads to building \CGG{} with its own internal ffmpeg, which includes its stability and feature patches. Normally ffmpeg is updated to version x.1 of each release. The reasons why it is best to use the thirdparty builds, with their static versions of the libraries and of ffmpeg, are explained in detail in \nameref{sec:latest_libraries} and in \nameref{sub:unbundled_builds}.

You can still compile \CGG{} with ffmpeg-git to enjoy the latest version. This build may lead to feature variation and less stability, but in most cases will work perfectly fine.
You have to supply the actual URL location of the ffmpeg git as you can see in this example \texttt{bld.sh} script:
\begin{lstlisting}[numbers=none]
#!/bin/bash
# run the whole build in a subshell, capturing all output to "log"
( ./autogen.sh
  ./configure --with-single-user --with-booby --with-git-ffmpeg=https://git.ffmpeg.org/ffmpeg.git
  make && make install ) 2>&1 | tee log
mv Makefile Makefile.cfg
cp Makefile.devel Makefile
\end{lstlisting}

Since the procedure for obtaining the latest ffmpeg version is not always kept up-to-date and the line numbers will always change, you may have to create some patch first. Generally those line numbers are only updated by a developer when a new stable version with worthwhile features is actually included in the \CGG{} build. FFmpeg is constantly changing and many times the git version is not as stable as desired.

Finally, it is possible to compile \CGG{} so that it uses the ffmpeg which is already installed on the system. This build takes less time to compile and may increase performance in both rendering and timeline manipulation. Again, there may be variations in functionality and less stability.
Getting a build to work in a system environment is not easy. If you have already installed libraries which are normally in the thirdparty build, getting them to be recognized means you have to install the devel version
so that the header files which match the library interfaces exist. If you want to build using only the thirdparty libraries installed in your system, just include \texttt{-{}-without-thirdparty} in your configure script. For example:
\begin{lstlisting}[numbers=none]
./configure --with-single-user --disable-static-build --without-thirdparty --without-libdpx
\end{lstlisting}
The library, libdpx, is just such an example of lost functionality: this build of \CGG{} will not be able to use the DPX format.
\section{Configuration Features}
\label{sec:configuration_features}
--enable-openjpeg build openjpeg (auto)
--enable-libogg build libogg (auto)
--enable-libsndfile build libsndfile (auto)
--enable-libsvtav1 build libsvtav1 (no)
--enable-libtheora build libtheora (auto)
--enable-libuuid build libuuid (yes)
--enable-libvorbis build libvorbis (auto)
--with-opencv opencv=sys/sta/dyn,git/tar=url (auto)
--with-numa system has libnuma (auto)
--with-openexr use openexr (auto)
--with-onevpl use Intel hardware oneAPI Video Processing Library (no)
Some influential environment variables:
CC C compiler command
The rule targets create the set of thirdparty packages which are built from local source archive copies of thirdparty source code and patches, if needed. The build rule set of dependencies allows for compiling multiple thirdparty programs simultaneously using maximum computer resources. This parallel build speeds up the process considerably. For example, these are full static build timings on the production build machine (full build includes building all thirdparty programs as well as all of \CGG{}):
\begin{center}
	\begin{tabular}{@{}lcr}
		1 cpu & = & 61 mins\\
		12 cpus & = & 7.5 mins\\
		24 cpus & = & 2 mins\\
	\end{tabular}
\end{center}
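For instance, on a 12-cpu machine the parallel build can be driven like this (a sketch; pick a job count matching your cpu count):

\begin{lstlisting}[numbers=none]
nice make -j12 2>&1 | tee /tmp/build.log
\end{lstlisting}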
\section{Using the very latest Libraries}
\label{sec:latest_libraries}
\textbf{dav1d}
\begin{description}[noitemsep]
 \item Status - currently \CGG{} is staying at 0.5.1. This is disappointing because there
may be speed gains in later versions that would be beneficial. However, it is usable for decoding
whereas libaom is a lot slower. Unfortunately, it has no effective encoder.
 \item Problem - 0.6 dav1d requires NASM 2.14 (and later versions of dav1d use even later versions of NASM) and uses instructions like vgf2p8affineqb,
not exactly an ``add'' instruction. It also uses meson, which is not widely available on all
distros. The more recent NASM requirement apparently provides for using AVX-512
instructions (like vgf2p8affineqb, which is more like a whole routine than a simple instruction).
\item Workaround already in use by \CGG{} - a Makefile was generated to replace Meson usage
but has to be continuously updated for new releases. Dav1d 0.5.1 requires NASM 2.13 so at this level
the newer distros will work. The availability of meson and nasm is a significant problem on
many systems which are still in wide use.
\item Your workaround - Because a request to dav1d developers to consider changes to
ensure their library is more widely usable does not appear to be in their future, since it works
for them, you can upgrade NASM to 2.14 to stay up to date. Even then, you will have to build using meson and incorporate it into \CGG{}.
\end{description}
\textbf{OpenExr}
\begin{description}[noitemsep]
 \item Status - stable at 2.4.1 from February 2020.
 \item Problem - the OpenExr tarball is not a single package but is 2 packages instead.
\item Workaround already in use by \CGG{} - reworked the packages so that it looks like
one package with 2 stubs.
 \item Your workaround - perhaps use the same workaround.
\end{description}
\textbf{OpenCV}
\begin{description}[noitemsep]
 \item Status - 2 different versions specific for O/S but none for Ubuntu 14, 32 or 64 bit.
 \item Problem - There are really 2 problems here. The first is that OpenCV is not really
``Open'' in that Surf is patented/non-free and there is no actual source available for certain
capabilities. The second is that cmake 3.5.1 is required for OpenCV 4.2.
 \item Workaround already in use by \CGG{} - using 3.4.1 for older distros and 4.2 for newer.
 \item Your workaround - upgrade cmake to 3.5.1 for upgrade to 4.2; add non-free to the
compile; and use binaries whose contents you cannot verify since there is no source code to compile.
Look into opencv4/opencv2/core/types.hpp:711;27.
\end{description}
\textbf{libaom}
\begin{description}[noitemsep]
 \item Status - currently at version 3.6.0 for older O/S and 3.8.0 for newer O/S.
 \item Problem - requires cmake 3.5 at v3.6.0 and 3.7.2 for v3.8.0.
 \item Workaround already in use by \CGG{} - modify configure.ac to switch from 3.8.0 to 3.6.0 for Ubuntu 16 and delete thirdparty/src/libaom-v3.8.0*.*.
 \item Your workaround - upgrade on some systems to cmake 3.7.2, switch to using 3.6.0 as in the previous sentence, or add \texttt{-{}-libaom-enable=no} to the configure line when building.
\end{description}
\textbf{x10tv}
\begin{description}[noitemsep]
 \item Status - this is the x10 TV remote control.
 \item Problem - INPUT\_PROP\_POINTING\_STICK not defined error on older distros.
 \item Workaround already in use by \CGG{} - leaving out of Ubuntu14, Ubuntu, Centos7.
 \item Your workaround - look into /usr/include/linux/input-event-codes.h.
\end{description}
\textbf{lv2 plugins, consisting of 6 routines}
\begin{description}[noitemsep]
 \item Status - currently at version 1.18.0 for lv2 and at different versions for the other 5.
 \item Problem - the current versions use cmake but the updated versions now all use meson and \CGG{} is not set up to handle that.
 \item Workaround already in use by \CGG{} - not upgrading at this time.
 \item Your workaround - if you are familiar with meson, you can independently upgrade the 6 routines.
\end{description}
\section{Valgrind Support Level}
\label{sec:valgrind_support_level}
\index{build!valgrind}
Valgrind is a memory mis-management detector. It shows you memory leaks, deallocation errors, mismanaged threads, rogue reads/writes, etc. \CGG{} memory management is designed to work with Valgrind detection methods. This assists in developing reliable code. Use of Valgrind points out problems so that they can be fixed. For example, when this version of \CGG{} shuts down, it deallocates memory instead of just stopping, thus making memory leak detection possible. An alternative to Valgrind is
\textit{heaptrack}, which has good documentation and for large programs can run
faster. You can find the source and information about it at {\small\url{https://github.com/KDE/heaptrack}}.
\index{valgrind!heaptrack}
The best way to compile and run valgrind is to run the developer static build. This takes 2 steps and you must already have gdb and valgrind installed:
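A plausible sketch of those two steps, assuming the developer Makefile setup from the \texttt{bld.sh} example earlier and that the unstripped binary is \texttt{./ci} in the cinelerra subdirectory (the exact make targets on your tree may differ):

\begin{lstlisting}[numbers=none]
# step 1: developer static build
make clean && make -j8
# step 2: run the unstripped binary under valgrind
cd cinelerra
valgrind --leak-check=full --num-callers=32 ./ci 2>&1 | tee /tmp/cin.log
\end{lstlisting}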
deadly. The listing of the memory leaks can be quite voluminous so locating the \textit{LEAK SUMMARY} section
towards the end of the report is most useful.
Another very useful valgrind run to locate uninitialized variables while executing is:

\hspace{2em}\texttt{valgrind -{}-log-file=/tmp/log -{}-tool=memcheck\\
-{}-num-callers=32 ./ci}
\section{CFLAGS has -Wall}
\label{sec:cflags_has_-wall}
Because the appimage file is nothing more than a compressed file containing the same structure as the installed program plus other libraries that allow the program to run independently from the system, the content can be extracted so that you can work on it as you would have on the normally installed program. To do this you will need the appimage management program.
Many Linux distros come with this management program by default, but others may not. For instance, in the case of Arch Linux, the \texttt{appimagetool-bin} package from AUR needs to be installed.
To work on the appimage, first unpack it using the command\protect\footnote{Example provided by Glitterball3} (note that you do not have to be root to do any of the following):
\begin{lstlisting}[numbers=none]
/{path to appimage}/CinGG-yyyymmdd.AppImage --appimage-extract
Start by downloading the \CGG{} source from Cinelerra's git. The last parameter is a directory name of your choice; the directory must not exist. As an example, the name \textit{cinelerra5} is used.
\begin{lstlisting}[numbers=none]
git clone --depth 1 git://git.cinelerra-gg.org/goodguy/cinelerra.git cinelerra5
\end{lstlisting}
The source will be in a subdirectory \texttt{cinelerra-5.1} of the directory created by the \textit{git clone} operation.
If context-sensitive help is needed, download the manual sources too, with a different destination directory.
\begin{lstlisting}[numbers=none]
git clone --depth 1 git://git.cinelerra-gg.org/goodguy/cin-manual-latex.git cin-manual-latex
\end{lstlisting}
Then move to the \texttt{/\{path to cinelerra-5.1\}/} directory.
There are two preliminaries to do before running the script:
1- If context sensitive help in the appimage version is required, the source of the manual and the tools (packages) to build it must be on the system. In the bld\_appimage.sh script, set the variable \texttt{MANUAL\_DIRECTORY=\$(pwd)/../../cin-manual-latex} to the path of the source of the manual. If the variable is empty, or the specified directory does not exist, \CGG{} will be built without built-in help. The path to the manual source can be an absolute or relative one. An easier method to include the help from the manual, rather than having to install a bunch of latex building software, is to simply download the latest tgz version from {\small\url{https://cinelerra-gg.org/download/images/HTML_Manual-202xxxxx.tgz}}.
Then extract the files using \texttt{tar xvf} into the cinelerra AppDir/usr/bin/doc directory.
This alternative method may not contain the most recent changes to the Manual but rather will contain what had been checked into Git by the date of the tgz file.
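For example (a sketch; substitute the actual dated file name, and adjust the AppDir path to wherever bld\_appimage.sh builds the tree):

\begin{lstlisting}[numbers=none]
cd /{path to cinelerra-5.1}
wget https://cinelerra-gg.org/download/images/HTML_Manual-202xxxxx.tgz
mkdir -p AppDir/usr/bin/doc
tar xvf HTML_Manual-202xxxxx.tgz -C AppDir/usr/bin/doc
\end{lstlisting}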
2- The script bld\_appimage.sh uses a platform specific version of appimagetool so that it can create appimages for \textit{x86\_64}, \textit{i686}, \textit{aarch64}, or \textit{armv7l} architecture. We need to add appimagetool-(platform).AppImage to the \texttt{/\{path to cinelerra-5.1\}/tools} directory, or somewhere in your path. You can download the tool for your system (e.g. appimagetool-x86\_64.AppImage) from git: {\small\url{https://github.com/AppImage/AppImageKit/releases}}. Be aware of the possibility that an older appimagetool from 2020 might work better on some systems compared to the latest release.
Always remember to make it executable. The four supported platforms are:
structure of the HTML manual itself, the new system-wide version of the script will be copied into .bcast5
again to ensure context help functionality. The older version of the script will be backed up (with the .bak
suffix) as a reference of the modifications done by the user.

\section{Unique Blend plugins workflow}
\label{sec:ba_bp_workflow}

The Blend Algebra and Blend Program plugins are unlike most of the other plugins: they provide a universal tool for creating new effects and may be handy in unexpected cases.
These 2 plugins provide the advanced feature of combining and modifying color pixels and transparencies of several tracks according to a mathematical algorithm written entirely by the user in the form of a compact and simple piece of code. Such user defined algorithms (blend functions) are compiled, dynamically linked, and immediately executed inside \CGG{} on the fly without the need to restart the application, reload the project, reattach plugins, and so on.
Because of this uniqueness, the Blend Algebra and Blend Program plugins workflow is documented here to explain how they work. Additional helpful information for developers can be found in the section on the Blend Algebra/Blend Program plugins themselves at \nameref{sub:blend_algebra}. You will most likely need to read both to get a full understanding.

\subsection{Preparation phase}%
\label{preparation_phase}

As in any realtime plugin, the \texttt{process\_buffer()} method in Blend Algebra/Blend Program gets a set of frames from the tracks the plugin is attached to. Then the following events take place. First, the plugin checks whether the configuration (the parameters) has changed. One parameter requires special treatment: the \textit{user function name}.

In order to prevent resource consuming recompilation of the functions on each new frame, the plugin maintains the successfully compiled and attached functions in a cache. If at some keyframe the function name gets changed, the plugin searches whether this function is already known and cached. In addition to important function related objects such as entry points, each cached function carries a timestamp representing when it was last checked to be up to date.
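Conceptually, each cache entry bundles exactly these items; a hypothetical C sketch (the field names are illustrative, not the plugin's actual code):

\begin{lstlisting}[numbers=none]
#include <time.h>

/* Hypothetical cache entry; the plugin's real member names differ. */
struct blend_func_cache {
	char   name[256];      /* user function name */
	void  *dl_handle;      /* handle returned by dlopen() */
	void  *entry_point;    /* symbol resolved via dlsym() */
	time_t checked;        /* when last verified to be up to date */
};
\end{lstlisting}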

If the current function name is empty, it means a function is not used. Nothing else has to be done; all tracks are fetched and passed along in the processing pipeline unchanged. If the function is not empty and seen for the first time, or its timestamp is older than the global timestamp, it is checked as follows.

\begin{enumerate}
\item A file lock is placed on the function source file to prevent concurrent modification of object files in case of several simultaneous compilations.
\item The compilation script \texttt{BlendAlgebraCompile.pl}/\texttt{BlendProgramCompile.pl} is started. The script checks if the resulting shared object file exists and is newer than the source, and recompiles it if not (see the sketch after this list).
\item The plugin checks if the shared object timestamp became newer than the timestamp of this function in cache (if any). If the cached version of the function in memory is up to date, it stays there. If not, the outdated function is detached from the plugin, the updated one is reattached, and its entry points are fetched and put into cache. The function's timestamp in cache is set to the current time since the function has just been checked.
\end{enumerate}
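The freshness check of step 2 amounts to something like the following shell sketch (the file names and compiler invocation are hypothetical; the real logic lives in the \texttt{.pl} scripts):

\begin{lstlisting}[numbers=none]
src=myfunc.c   # hypothetical user function source
obj=myfunc.so  # shared object built from it
(
  flock 9                                     # step 1: exclude concurrent builds
  if [ ! -e "$obj" ] || [ "$src" -nt "$obj" ]; then
    gcc -O2 -fPIC -shared -o "$obj" "$src"    # recompile only when outdated
  fi
) 9>"$src.lock"
\end{lstlisting}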

While recompiling or dynamic linking, various things may go wrong.
\begin{enumerate}
\item In the unlikely scenario where the given function file does not exist, the program does nothing, the same as for an empty function. No error message is shown in this case in order to prevent a possible deadlock.
\item If recompilation was unsuccessful because of a syntax error, an error message is shown. More detailed diagnostics from the compiler can be seen in the terminal window in which \CGG{} was started.
\item If compilation succeeded, but dynamic linking did not, an error message is shown. In case of any error, the failed function is marked with the current timestamp in cache so that the error messages appear only once before the global timestamp gets updated.
\end{enumerate}

Updating the global timestamp forces all cached functions to be checked for recompilation when first accessed. The global timestamp is updated when the following events occur: changing the function name, pressing the \texttt{Edit...} or \texttt{Refresh} button, or exiting the \texttt{Attach...} dialog with the \texttt{OK} button.

If the current active function is up to date and attached, the plugin fetches video frames from all of the affected tracks together with the important parameters like frame width and height. Then the \texttt{INIT} phase of the function is executed (once for each frame). Several important parameters are requested as defined by the function. They are (a hypothetical sketch follows the list):

\begin{itemize}
	\item Working color space needed inside the function. If it is not the same as the color space of the project, then color space conversions have to be done.
	\item The required number of tracks the function works on. If fewer than the required number of tracks are available, an error message is shown and the function is not executed.
	\item Whether the function supports parallelizing or not. If the function does not claim parallelizing support, it will be executed sequentially even if the \textit{PARALLEL\_REQUEST} checkbox is ON in the plugin GUI.
\end{itemize}
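A hypothetical C sketch of what the \texttt{INIT} phase reports (names invented for illustration, not the plugins' actual interface):

\begin{lstlisting}[numbers=none]
/* Hypothetical INIT-phase result; not the plugins' real declaration. */
struct blend_init_info {
	int colorspace;    /* working color space the function wants */
	int min_tracks;    /* required number of input tracks */
	int parallel_ok;   /* nonzero if parallel execution is supported */
};
\end{lstlisting}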

\subsection{Processing phase}%
\label{processing_phase}

After the preparation phase, the processing itself takes place. If running sequentially instead of in parallel, the following is done for each frame pixel individually.

For each input track, the corresponding pixel is split into color components according to the actual color space of the project. All color components are converted to float (the C language type) in the ranges $[0.0 .. 1.0]$ for R, G, B, A, Y or $[-0.5 .. +0.5]$ for U, V. If the project color space has no alpha channel, the alpha component is set to A=1.0.
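For an 8-bit project, the normalization just described amounts to the following C sketch (illustrative names only):

\begin{lstlisting}[numbers=none]
/* Convert 8-bit components to the float working ranges described above. */
static void to_float_rgba(const unsigned char px[4], float out[4])
{
	for (int i = 0; i < 4; i++)
		out[i] = px[i] / 255.0f;    /* R, G, B, A (or Y) -> [0.0 .. 1.0] */
}

static float to_float_uv(unsigned char c)
{
	return c / 255.0f - 0.5f;       /* U, V -> [-0.5 .. +0.5] */
}
\end{lstlisting}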

If the function uses a different color space than the project uses, the required conversions are performed. The key color components (selected in the plugin GUI) are also converted to the function color space in the same manner.

For Blend Algebra, the values for output are preinitialized from the track which the output is to go to. All the other tracks are cleared if the corresponding checkbox in the plugin GUI is checked. For Blend Program, this step is not needed.

\subsection{User function phase}%
\label{user_function_phase}

The user function is called with the parameters: actual \textit{number of tracks}, \textit{4 arrays for the 4 color components} (dimensions are equal to the number of tracks), \textit{4 key color components}, \textit{current pixel coordinates} (x, y: upper left frame corner is (0,0), lower right is (width-1, height-1)), and \textit{width and height}. In addition, for Blend Algebra there are \textit{placeholders} for the result.
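Taken together, the calling convention described above corresponds to a prototype roughly like this; it is a hypothetical reconstruction from the text, not the plugins' actual declaration:

\begin{lstlisting}[numbers=none]
/* Hypothetical prototype; the real entry point signature may differ. */
void user_function(int ntracks,
	float *r, float *g, float *b, float *a,  /* one element per track */
	float kr, float kg, float kb, float ka,  /* key color components */
	int x, int y, int width, int height,
	float *ro, float *go, float *bo, float *ao); /* Blend Algebra result */
\end{lstlisting}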

After the user function returns, the color components of the relevant pixels are first clipped to range if the corresponding checkbox in the plugin GUI is on. Relevant for Blend Program are the pixels of all the tracks (all tracks can be modified); for Blend Algebra, only the result.

After optional clipping, the color components are checked for NaNs. If a NaN is found, the pixel is replaced with the substitution color. Then backconversion to the project color space is performed.

If the project has no alpha channel, but the function returned an alpha value not equal to 1.0, the alpha channel is simulated as if an opaque black background were used.

If the project color space is not FLOAT, unconditional clipping followed by 8-bit transformation takes place.
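For a single float component, these post-return steps reduce to something like this C sketch (illustrative only):

\begin{lstlisting}[numbers=none]
#include <math.h>

/* Optional clipping, then NaN substitution, per the steps above. */
static float postprocess(float v, int clip, float lo, float hi, float subst)
{
	if (clip) {
		if (v < lo) v = lo;
		if (v > hi) v = hi;
	}
	if (isnan(v))
		v = subst;   /* component of the substitution color */
	return v;
}
\end{lstlisting}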

For Blend Algebra, the result is placed in the right output track. For Blend Program, this step is unnecessary since all tracks are modified in place.

Then the loop is repeated for the next pixel in a row, next row in a frame, next frame in the whole sequence, and so on.

If the function is to be run parallelized, the necessary number of threads is created (as defined in \texttt{Settings $\rightarrow$ Preferences $\rightarrow$ Performance; Project SMP cpus}). Running parallel on 1 cpu is not exactly the same as running sequential because an extra processing thread is created, while sequential execution takes place inside the main plugin thread. This could induce some subtle differences if the function uses static variables inside or something else which is thread unsafe.
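A trivial example of the kind of thread-unsafe construct meant here (hypothetical):

\begin{lstlisting}[numbers=none]
/* Not thread safe: one accumulator shared by all processing threads. */
static float running_sum;

void accumulate(float v)
{
	running_sum += v;   /* data race when the function runs parallelized */
}
\end{lstlisting}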