\newline Presets: \textit{DNxHR, ffv1, AVC\_Intra\_100}
\item[MOV] Created by Apple. It is a suitable format for editing because it organizes the files within the container into hierarchically structured \textit{atoms} described in a header. This brings simplicity and compatibility with various software and does not require continuous encoding/decoding in the timeline.
\newline Presets: \textit{DNxHR, ffv1, CineformHD, huffyuv}
\item[PRO] Different extension, but it is still mov. ProRes is proprietary and there are no official encoders except the original Apple one. The engine used by ffmpeg is the result of reverse engineering and, according to Apple, does not guarantee the same quality and performance as the original\protect\footnote{https://support.apple.com/en-us/HT200321}. The option \textit{vendor=apl0} is used to make it appear that the original Apple engine was used.
\newline Presets: \textit{ProRes; ProRes\_ks}
\item[QT] Different extension, but it is still mov.
\newline Presets: \textit{DNxHD, magicyuv, raw, utvideo}
\item[MP4] The most popular, mostly used for general purpose. Many other formats belong to this family (MPEG).
\newline h264 is encoded via x264: open, highly configurable and well documented; h265/HEVC is encoded via x265, likewise open, highly configurable and well documented. x264/x265 are encoders only (they do not decode).
\newline Presets: \textit{h264, h265, mjpeg, mpeg2, obs2youtube}
\item[WEBM] Open; similar to mp4 but not as widespread (it is used by YouTube). It belongs to the Matroska family. In \CGG{} there are specific Presets with \texttt{.youtube} extension, but they are still webm. For VP9 and AV1 presets, two-pass rendering is recommended, as it provides higher quality.
\newline Presets: \textit{VP8, VP9, AV1}
\item[MKV] Open, highly configurable and widely documented. It might have seeking problems. It belongs to the Matroska family. For VP9 presets, two-pass rendering is recommended, as it provides higher quality.
\newline Presets: \textit{Theora, VP8, VP9}
\item[AVI] Old and limited format (no multistreams, no subtitles, limited metadata) but with high compatibility.
\newline Presets: \textit{asv, DV, mjpeg, xvid}
\label{ssub:note_mkv_container}
\index{mkv}
Matroska is a modern universal container that is Open Source, so there is a lot of ongoing development with community input, along with excellent documentation. Also derived from this format is the \textit{Webm} container used by Google and YouTube, which uses the VP8/VP9 and AV1 codecs. Although its use in \CGG{} is highly recommended, you may have seeking problems during playback. The internal structure of Matroska is sophisticated but requires exact use of internal keyframes (I-frames, B-frames and P-frames), otherwise playback on the timeline may be subject to freezes and dropped frames. The mkv format can be problematic if the source encoding is not done well by the program (for example, OBS Studio). For an easy but accurate introduction to codecs and how they work see: {\small\url{https://ottverse.com/i-p-b-frames-idr-keyframes-differences-usecases/}}.
To find out the keyframe type (I, P, B) of your media you can use ffprobe:
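One possible invocation has the following form (the exact flags can be adapted to taste; \texttt{-show\_entries frame=pict\_type} restricts the per-frame output to the picture type):

\begin{lstlisting}
ffprobe -v error -select_streams v:0 -show_entries frame=pict_type input.mkv | grep pict_type
\end{lstlisting}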
\textbf{input.mkv:} is the media to be analyzed (it can be any container and codec).
(see {\small\url{https://ffmpeg.org/ffprobe.html}} for more details)
We thus obtain a list of all frames in the analyzed media and their type. For example:
\begin{lstlisting}
pict_type=I
pict_type=P
pict_type=B
pict_type=B
\end{lstlisting}
There are also 2 useful scripts that not only show the keyframe type but also the GOP length of the media. They are zipped tars with READMEs at: \newline
{\small\url{https://cinelerra-gg.org/download/testing/getgop_byDanDennedy.tar.gz}} \newline
{\small\url{https://cinelerra-gg.org/download/testing/iframe-probe_byUseSparingly.tar.gz}}
We can now look at the timeline of \CGG{} to see the frames that give problems in playback. Using a codec of type Long GOP, it is probably the rare I-frames that give the freezes.
To find a solution you can use MKVToolNix ({\small\url{https://mkvtoolnix.download/}}) to correct and insert new keyframes into the mkv file (matroska talks about \textit{cues data}). It can be done even without new encoding. Or you can use the \texttt{Transcode} tool within \CGG{} because during transcoding new keyframes are created that should correct errors.
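As a minimal sketch of the MKVToolNix route (filenames are illustrative), a simple remux with \texttt{mkvmerge} rewrites the container and regenerates its cue data without re-encoding the streams:

\begin{lstlisting}
mkvmerge -o fixed.mkv input.mkv
\end{lstlisting}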
\subsubsection{Image Sequences}
\label{ssub:ffmpeg_image_sequences}
The image sequences can be uncompressed, or use lossy or lossless compression, but are always intraframe. They are suitable for post-processing, that is compositing (VFX) and color correction. Note: even if \CGG{} outputs fp32, the values in exr/tiff are normalized to $0 - 1.0f$.
\begin{description}
\item[DPX] Film standard; uncompressed; high quality. \textit{Log} type.
\subsubsection{Image Sequences}
\label{sub:internal_image_sequences}
There are quite a few formats available. Note: even if \CGG{} outputs fp32, the values in exr/tiff are normalized to $0 - 1.0f$.
\begin{description}
\item[EXR Sequence] OpenEXR (Open Standard) is a competing film standard to DPX, but \textit{Linear} type.
\begin{description}
\item[Color primaries]: the gamut of the color space associated with the media, sensor, or device (display, for example).
\item[Transfer characteristic function]: converts linear values to non-linear values (e.g. logarithmic). It is also called Gamma correction.
	\item[Color matrix function] (scaler): converts from one color model to another. $RGB \leftrightarrow YUV$; $RGB \leftrightarrow Y'CbCr$; etc.
\end{description}
The camera sensors are always RGB and linear. Generally, those values get converted to YUV in the files that are produced, because it is a more efficient format thanks to chroma subsampling, and produces smaller files (even if of lower quality, i.e. you lose part of the color data). The conversion is nonlinear and so it concerns the "transfer characteristic" or gamma. The encoder gets YUV input and compresses that. It stores the transfer function as metadata if provided.
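As a concrete instance of a color matrix, BT.709 derives luma from the gamma-corrected components as
\[
Y' = 0.2126\,R' + 0.7152\,G' + 0.0722\,B'
\]
with the chroma components then built as scaled differences, $C_B \propto B' - Y'$ and $C_R \propto R' - Y'$.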
\subsection{HDR}%
\label{sub:hdr}

Before 2016, HDR was understood as values outside the range $(0 - 1.0f)$. Since there were no devices capable of reaching these values, the only options available were to create digital images (in Blender, for example) or to merge multiple images of different dynamic ranges into a single HDR image. In 2016, with the arrival of sensors and monitors capable of extended dynamic ranges, HDR was standardized: the BT.2100 color space has the same gamut and color model as BT.2020 but different transfer functions (gamma), namely PQ (used by HDR10 and Dolby Vision) and HLG. BT.2100 considers values above 1.0f illegal. There are two ways to return to \textit{legal} values: clipping and tone-mapping. The BT.2100 color space is an efficient standard for encoding HDR colors, but it is not well suited for many rendering and composition (blending) operations.

\CGG{} runs internally in 32-bit floating point and thus can read and process values outside the range $(0 - 1.0f)$. However, some plugins and tools introduce clipping, which must be taken into account to keep a pipeline that preserves the original values. The manual indicates which plugins limit pixel values to the range $(0, 1.0f)$. The easiest way to see whether a media file goes outside the $(0 - 1.0f)$ range is to use the \texttt{eyedropper} tool (Get Color) in the Compositor window.
If we use HDR media and need to do a tone mapping to bring the pixel values within the range $(0, 1.0)$, we can use the \texttt{Blend Program} plugin and, inside it, load the function (found in \texttt{=>Syslib}) \textit{tone\_mapping.bp}. Instructions for using this function can be found \href{https://cinelerra-gg.org/download/testing/BlendPluginExamples/Examples.txt}{here} at example 6.

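The \textit{tone\_mapping.bp} function itself is not reproduced here; as a generic illustration of the two approaches, clipping simply truncates at the white point, while a global tone-mapping operator such as Reinhard's compresses the whole range smoothly:
\[
\mathrm{clip}(x) = \min(x,\, 1.0)
\qquad
\mathrm{reinhard}(x) = \frac{x}{1 + x}
\]
Clipping discards all detail above 1.0f, whereas tone-mapping keeps every highlight distinct, at the cost of darkening the mid-tones.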
\subsection{CMS}%
\label{sub:cms}
A color management system (CMS) describes how the colors of images/videos are translated from their current color space to the color space of other devices, i.e. monitors. The basic problem is being able to display the same colors on every device we use for editing and on every device on which our work will be viewed. The color management operation consists of a color space conversion (matrix and 1D LUT); colors that exceed the display's target gamut are numerically clipped. Calibrating and keeping our own hardware under control is feasible, but when the work is viewed on the internet or from a DVD, etc., it will be impossible to maintain exactly the same colors. The most we can hope for is that there are not too many or too severe alterations; if the basis we have set up is consistent, the alterations should be acceptable because they do not result from the sum of further issues at each step. There are two types of color management: \textit{Display referred} (DRC) and \textit{Scene referred} (SRC). Display referred: 1.0f (white point) is the reference white level of the display. Scene referred: 1.0f is the reference white of the Color Space in use (nominal).
\begin{itemize}
\item \textbf{DRC} is based on having a calibrated monitor. What it displays is considered correct and becomes the basis of our color grading. The goal is that the colors of the final render will not change too much when displayed on other hardware or in other contexts. Be careful to make sure there is a color profile for each type of color space you choose for your monitor. If the work is to be viewed on the internet, set the monitor to \textit{sRGB} with its color profile; for HDTV set the monitor to \textit{Rec.709} with its color profile; for 4K to \textit{Rec.2020}; for Cinema to \textit{DCI-P3}; etc.
Since \CGG{} does not have a CMS, it is essential to have a monitor calibrated and set to sRGB, which is exactly what is displayed on the program's timeline. You have these cases:
\begin{center}
	\begin{tabular}{|l|l|p{8cm}|}
\hline
\textbf{Timeline} & \textbf{Display} & \textbf{Description} \\
\hline
Let us give an example of color workflow in \CGG{}. We start with a source of type YUV (probably: YCbCr); this is decoded and converted to the chosen color model for the project, resulting in a \textit{temporary}. Various jobs and conversions are done in FLOAT math and the result remains in the chosen color model until further action. In addition, the temporary is always converted to sRGB 8-bit for monitor display only. If we apply the \texttt{ChromaKey (HSV)} plugin, the temporary is converted to HSV (in FLOAT math) and the result in the temporary becomes HSV. If we do other jobs the temporary is again converted to the set color model (or others if there is demand) to perform the other actions. At the end of all jobs, the obtained temporary will be the basis of the rendering that will be implemented according to the choice of codecs in the render window (\textit{Wrench}), regardless of the color model set in the project. If we have worked well the final temporary will retain as much of the source color data as possible and will be a good basis for encoding of whatever type it is.
For practical guidelines, one can imagine starting with a quality file, for example \textit{10-bit YUV 4:2:2}. You set the project to \texttt{RGBA-FLOAT}; the \texttt{YUV color space} to your choice of Rec709 (for FullHD) or BT 2020NCL (for UHD) and finally the \texttt{YUV color range} to JPEG. If the original file has the MPEG type color range then you convert to JPEG with the \texttt{ColorSpace} plugin. If you want to transcode to a quality intermediate you can use \textit{DNxHR 422}, or even \textit{444}, and perhaps do the editing step with a \textit{proxy}. For rendering you choose the codec appropriate for the file destination, but you can still generate a high-quality master, for example \textit{ffv1 .mov} with lossless compression.
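As an illustration of the intermediate step performed outside \CGG{} (filenames are examples; the render presets inside \CGG{} achieve the same result), a DNxHR 4:2:2 intermediate in a mov container can be produced with ffmpeg's \texttt{dnxhd} encoder using the \texttt{dnxhr\_hq} profile:

\begin{lstlisting}
ffmpeg -i source.mp4 -c:v dnxhd -profile:v dnxhr_hq \
       -pix_fmt yuv422p -c:a pcm_s16le intermediate.mov
\end{lstlisting}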

\begin{figure}[htpb]
	\centering
	\includegraphics[width=1.0\linewidth]{color_01.png}
	\caption{Color settings (Settings $\rightarrow$ Format / Settings $\rightarrow$ Preferences)}
	\label{fig:color_01}
\end{figure}