+\subsubsection{General Purpose}
+\label{ssub:ffmpeg_video_general_purpose}
+
+These are also called delivery codecs. They are the most widely used, suitable for streaming, video sharing, watching TV, smartphones, and more. Because they use lossy interframe compression, they produce smaller files of variable quality. They are not suitable for editing, compositing or color correction, and each re-encoding pass degrades the quality further. The most common codecs have hardware support (VA-API, VDPAU, NVENC) that makes them more efficient.
+
+\begin{description}
+ \item[MOV] Created by Apple. It is a suitable format for editing because it organizes the files within the container into hierarchically structured "atoms" described in a header. This brings simplicity and compatibility with various software and does not require continuous encoding/decoding in the timeline.
+ \newline Presets: \textit{mov}
+ \item[QT] A different extension, but it is still MOV.
+ \newline Presets: \textit{mjpeg, DV, Div, CinePack}
+ \item[MP4] The most popular. Many other formats belong to this family (MPEG);
+ \newline h264 encoding is done by x264, open, highly configurable and documented; h265/HEVC encoding is done by x265, likewise open, highly configurable and documented. x264/x265 are encoders only.
+ \newline Presets: \textit{h264, h265, mjpeg, mpeg2, obs2youtube}
+ \item[WEBM] Open; similar to mp4 but not as widespread (it is used by YouTube). It belongs to the Matroska family. In \CGG{} there are specific Presets with \texttt{.youtube} extension, but they are still webm.
+ \newline Presets: \textit{VP8, VP9, AV1}
+ \item[MKV] Open, highly configurable and widely documented. It might have seeking problems. It belongs to the Matroska family.
+ \newline Presets: \textit{Theora, VP8, VP9}
+ \item[AVI] Old and limited format (no multistreams, no subtitles, limited metadata) but with high compatibility.
+ \newline Presets: \textit{asv, DV, mjpeg, xvid}
+ \item[MPG] Parent of the MPEG family, to which MP4 also belongs. Mpeg is used by \CGG{} as default for proxies and mpeg-2 is the standard for Video DVDs.
+ \newline Presets: \textit{mpeg, mpeg2}
+\end{description}
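+
+As a hedged sketch of a typical delivery encode from the command line (the file names and the CRF value are only illustrative), using the x264 encoder mentioned above:
+
+\begin{lstlisting}[numbers=none]
+ $ ffmpeg -i input.mov -c:v libx264 -crf 18 -preset slow -c:a aac -b:a 192k output.mp4
+\end{lstlisting}
+
+Lower \texttt{-crf} values give higher quality and bigger files; \texttt{-preset} trades encoding speed for compression efficiency.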
+
+\subsubsection{Note on Matroska (mkv) container}
+\label{ssub:note_mkv_container}
+\index{mkv}
+
+Matroska is a modern, universal, open-source container, with active ongoing development, community input, and excellent documentation. The \textit{Webm} container used by Google and YouTube, which carries the VP8/VP9 and AV1 codecs, is derived from it. Although its use in \CGG{} is highly recommended, you may experience seeking problems during playback. The internal structure of Matroska files is sophisticated, but it requires correct placement of the internal keyframes (I-frames, P-frames and B-frames), otherwise playback on the timeline may freeze or drop frames. The mkv format can be problematic if the source encoding was not done well by the producing program (for example, OBS Studio). For an easy but accurate introduction to codecs and how they work see: {\small\url{https://ottverse.com/i-p-b-frames-idr-keyframes-differences-usecases/}}.
+
+To find out the keyframe type (I, P, B) of your media you can use ffprobe:
+
+\begin{lstlisting}[numbers=none]
+ $ ffprobe -v error -hide_banner -of default=noprint_wrappers=0 -print_format flat -select_streams v:0 -show_entries frame=pict_type input.mkv
+\end{lstlisting}
+
+\textbf{-v error -hide\_banner:} serves to hide a blob of information that is useless for our purposes.
+
+\textbf{-of:} is an alias for \textit{-print\_format}; here it is used to pass \textit{default=noprint\_wrappers=0}.
+
+\textbf{default=noprint\_wrappers=0:} keeps the wrappers in the output, so that the information we need from the parsed stream is shown.
+
+\textbf{-print\_format flat:} is used to display the result of ffprobe according to a \textit{flat} format (you can choose CSV, Json, xml, etc).
+
+\textbf{-select\_streams v:0:} is used to choose the first video stream (v:0), in case there are multiple audio and video streams (tracks, in \CGG{}).
+
+\textbf{-show\_entries:} selects the type of data collected by ffprobe that we want to display (besides \texttt{frame} there are also \texttt{stream}, \texttt{format}, and \texttt{packet}; they are called \textit{specifiers}).
+
+\textbf{frame=pict\_type:} within the chosen specifier, indicates the data to be displayed; in this case \textit{pict\_type}, that is, the frame type (I, P, B) of the frame under consideration.
+
+\textbf{input.mkv:} is the media to be analyzed (it can be any container and codec).
+
+(see {\small\url{https://ffmpeg.org/ffprobe.html}} for more details)
+
+We thus obtain a list of all frames in the analyzed media and their type. For example:
+
+\begin{lstlisting}[numbers=none]
+ frames.frame.0.pict_type="I"
+ frames.frame.1.pict_type="P"
+ frames.frame.2.pict_type="B"
+ frames.frame.3.pict_type="B"
+ frames.frame.4.pict_type="B"
+ ...
+\end{lstlisting}
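+
+To get a quick tally of each frame type instead of the full list, the same query can be piped through standard shell tools (a sketch; the csv output format is used here for easier filtering):
+
+\begin{lstlisting}[numbers=none]
+ $ ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -print_format csv input.mkv | sort | uniq -c
+\end{lstlisting}
+
+A healthy Long GOP stream shows few I-frames and many P/B-frames.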
+
+There are also two useful scripts that show not only the frame types but also the GOP length of the media. They are gzipped tars, each with a readme, at: \newline
+{\small\url{https://cinelerra-gg.org/download/testing/getgop_byDanDennedy.tar.gz}} \newline
+{\small\url{https://cinelerra-gg.org/download/testing/iframe-probe_byUseSparingly.tar.gz}}
+
+We can now look at the \CGG{} timeline to find the frames that cause problems in playback. With a Long GOP codec, it is probably the sparse I-frames that cause the freezes.
+As a solution, you can use MKVToolNix ({\small\url{https://mkvtoolnix.download/}}) to correct the file and insert new keyframes (Matroska calls this \textit{cues} data); this can be done without re-encoding. Alternatively, you can use the \texttt{Transcode} tool within \CGG{}, because transcoding creates new keyframes, which should correct the errors.
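+
+As a sketch of the MKVToolNix route (file names assumed): simply remuxing with \texttt{mkvmerge} rewrites the container and regenerates the cues, without touching the encoded streams:
+
+\begin{lstlisting}[numbers=none]
+ $ mkvmerge -o fixed.mkv input.mkv
+\end{lstlisting}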
+
+\subsubsection{Image Sequences}
+\label{ssub:ffmpeg_image_sequences}
+
+Image sequences can be uncompressed, or use lossy or lossless compression, but they are always intraframe. They are suitable for post-processing, that is, compositing (VFX) and color correction. Note: even if \CGG{} outputs fp32, the exr/tiff values are normalized to 0--1.0f.
+
+\begin{description}
+ \item[DPX] Film standard; uncompressed; high quality. \textit{Log} type.
+ \item[PNG] Uncompressed or lossless compression. Supports alpha channel.
+ \item[WEBP, TIFF, GIF, JPEG, ...] Variable compression, size and quality.
+\end{description}
+
+\subsubsection{Old Pro Formats}
+\label{ssub:ffmpeg_old_pro_formats}
+
+Some formats, though used in the past in the pro field, are disappearing with the evolution of technology. DVD is becoming more and more of a niche, while Bluray is still widespread (also as a backup); DV/HDV remain only to support old camcorders with magnetic tape. DV is still a quality format, with intraframe compression; HDV is mpeg-2 compressed.
+
+\begin{description}
+ \item[AVI] old and limited format but with high compatibility.
+ \newline Presets: \textit{DV\_pal, DV\_ntsc, mjpeg}
+ \item[QT] belongs to the Apple mov family.
+ \newline Presets: \textit{DV, mjpeg}
+ \item[M2TS] format for Bluray (mpeg4). Bluray player devices need a standard Bluray disc structure (bdwrite) for playback\protect\footnote{\CGG{} offers specific functionality for creating DVDs/Blurays}.
+ \newline Presets: \textit{AVC422, Lossless, Bluray, hevc}
+ \item[MP4] Belongs to the MPEG family. Motion JPEG uses jpeg compression, hence intraframe, so it maintains good quality and fluidity in editing. It is, however, now an old and limited codec.
+ \newline Presets: \textit{mjpeg}
+\end{description}
+
+\subsection{Audio FFmpeg Formats}%
+\label{sub:FFmpeg_audio}
+
+Audio formats and codecs take far fewer resources and much less space than video ones, so they are often used without compression for maximum quality. There are, however, also compressed formats and codecs, widely used in streaming and sharing.
+
+\subsubsection{High Quality}
+\label{ssub:ffmpeg_audio_high_quality}
+
+\begin{description}
+ \item[FLAC] Open; used for storing music. It has lossless compression.
+ \newline preset: \textit{flac}
+ \item[PCM] Raw format that encodes the signal with \textit{pulse-code modulation} (PCM). FFmpeg does not support pcm audio if you use mp4 as a container.
+ \newline Presets: \textit{s8, s16, s24, s32}
+ \item[WAV] Raw format created by Microsoft. Its 32-bit addressing leads to a 4\,GB recording limit. It is a widely used standard.
+ \newline Presets: \textit{s24le, s32le}
+ \item[W64] Wave format created by Sony to overcome the 4\,GB recording limit of WAV. Poorly supported.
+ \newline Presets: \textit{s16le, s24le, s32le}
+ \item[MKA] Open, highly configurable and documented. It belongs to the Matroska family. Uncompressed pcm type.
+ \newline Presets: \textit{s16le, s24le, s32le}
+ \item[ALAC] Apple's codec, free to use but not open source. It is lossless and of high quality but is slower than other similar codecs.
+ \newline Presets: \textit{m4a, mkv, qt}
+\end{description}
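+
+As a small sketch (file names assumed), a lossless archive copy to FLAC takes a single FFmpeg command:
+
+\begin{lstlisting}[numbers=none]
+ $ ffmpeg -i input.wav -c:a flac output.flac
+\end{lstlisting}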
+
+\subsubsection{General Purpose}
+\label{ssub:ffmpeg_audio_general_purpose}
+
+\begin{description}
+ \item[MP3] Belongs to the MPEG family. The most widely used in streaming and sharing.
+ \newline preset: \textit{mp3}
+ \item[OGG] Open, highly configurable and documented; a container created by Xiph.Org. Flac has lossless compression; opus is compressed but modern and of good quality, superior to mp3; vorbis is compressed and dated, but lightweight and widely compatible.
+ \newline Presets: \textit{flac, opus, vorbis}
+ \item[PRO] Created by Apple; compressed audio codec, competing with mp3.
+ \newline Presets: \textit{aac256k}
+\end{description}
+
+\subsection{\CGG{} Internal Engine}%
+\label{sub:internal_engine}
+
+FFmpeg is the default engine, but you can also use \CGG{}'s internal engine, which supports fewer formats but is efficient and of high quality.
+
+\subsubsection{Video general purpose}
+\label{ssub:internal_general_purpose}
+
+\begin{description}
+ \item[RAW DV] supports the DV standard.
+ \newline Presets: \textit{dv}
+ \item[MPEG Video] highly configurable. Extension \texttt{.m2v}.
+ \newline Presets: \textit{mpeg1, mpeg2}
+ \item[OGG Theora/Vorbis] Open, easily configurable. Theora for video, Vorbis for audio.
+ \newline Presets: \textit{theora, vorbis}
+\end{description}
+
+\subsubsection{Image Sequences}
+\label{sub:internal_image_sequences}
+
+There are quite a few formats available. Note: even if \CGG{} outputs fp32, the exr/tiff values are normalized to 0--1.0f.
+
+\begin{description}
+ \item[EXR Sequence] OpenEXR (Open Standard) is a competing film standard to DPX, but \textit{Linear} type.
+ \item[Ppm Sequence] is RGB Raw.
+ \item[Tga Sequence] is RGB(A) compressed or uncompressed.
+ \item[Tiff Sequence] is RGB(A) or RGB(A)-Float with various compression types.
+ \item[Jpg, gif Sequences] lossy compressed and limited formats.
+\end{description}
+
+\subsubsection{Audio general purpose}
+\label{sub:internal_audio_general_purpose}
+
+\begin{description}
+ \item[AC3] widely used multichannel standard (Dolby Digital). Format with lossy compression.
+ \newline Presets: \textit{ac3}
+ \item[Apple/SGI AIFF] Created by Apple; an uncompressed format (pcm type), optionally with 32/64-bit floating point samples.
+ \newline Presets: \textit{aif}
+ \item[Sun/Next AU] Created by Sun and used in Unix environments, now largely in disuse. It can be of pcm type or use lossy compression.
+ \newline Presets: \textit{au}
+ \item[Flac] Open, lossless compression, very good quality.
+ \newline preset: \textit{flac}
+ \item[Microsoft WAV] Created by Microsoft. It can store 16/24/32-bit linear (integer) or floating point samples.
+ \newline Presets: \textit{wav}
+ \item[MPEG Audio] Very widespread standard. Extension \texttt{.mp3}.
+ \newline Presets: \textit{mp3}
+\end{description}
+
+\section{Overview on Color Management}%
+\label{sec:overview_color_management}
+\index{color!management}
+
+\CGG{} does not have support for ICC color profiles or global color management to standardize and facilitate the management of the various files with which it works. But it has its own way of managing color spaces and conversions; let's see how.
+
+\subsection{Color Space}%
+\label{sub:the_color_spaces}
+
+A color space is a subspace of the absolute CIE XYZ color space, which includes all possible, human-visible color coordinates (and therefore makes human visual perception mathematically tractable). CIE XYZ is based on the RGB color model and consists of an infinite three-dimensional space characterized (and limited) by the xyz coordinates of five particular points: the Black Point (pure black), the White Point (pure white), the reddest red (pure red), the greenest green (pure green), and the bluest blue (pure blue). All these coordinates define an XYZ matrix, and the color spaces are submatrices (minors) of it. The absolute color space is device independent, while the color subspaces are mapped to each individual device. For a more detailed introduction see: {\small\url{https://peteroupc.github.io/colorgen.html}}
+
+A color space consists of primaries (\textit{gamut}), a transfer function (\textit{gamma}), and matrix coefficients (\textit{scaler}).
+
+\begin{description}
+ \item[Color primaries]: the gamut of the color space associated with the media, sensor, or device (display, for example).
+ \item[Transfer characteristic function]: converts linear values to non-linear values (e.g. logarithmic). It is also called Gamma correction.
+ \item[Color matrix function] (scaler): converts from one color model to another. $RGB \leftrightarrow YUV$; $RGB \leftrightarrow Y'CbCr$; etc.
+\end{description}
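+
+As an illustration of these three components, FFmpeg's \texttt{colorspace} filter lets you declare the input and request the output explicitly; a hedged sketch (file names assumed) converting SD material from BT.601 to BT.709:
+
+\begin{lstlisting}[numbers=none]
+ $ ffmpeg -i input.mp4 -vf colorspace=all=bt709:iall=bt601-6-625 -c:a copy output.mp4
+\end{lstlisting}
+
+The \texttt{all=}/\texttt{iall=} shortcuts set primaries, transfer function and matrix coefficients together.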
+
+The camera sensors are always RGB and linear. Generally, those values get converted to YUV in the files that are produced, because it is a more efficient format thanks to chroma subsampling and produces smaller files (even if of lower quality, i.e. you lose part of the color data). The conversion is nonlinear, so it concerns the transfer characteristic, or gamma. The encoder receives YUV input and compresses it, storing the transfer function as metadata if provided.
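+
+You can check which primaries, transfer function and matrix were actually stored as metadata in a file with ffprobe (file name assumed):
+
+\begin{lstlisting}[numbers=none]
+ $ ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_primaries,color_transfer,color_space input.mp4
+\end{lstlisting}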
+
+\subsection{CMS}%
+\label{sub:cms}
+
+A color management system (CMS) describes how the colors of images/videos are translated from their current color space to the color space of the other devices, i.e. monitors. The basic problem is to display the same colors on every device we use for editing and on every device on which our work will be viewed. Calibrating and keeping our hardware under control is feasible, but when the work is viewed on the internet or from DVD, etc., it will be impossible to maintain exactly the same colors. The most we can hope for is that there are not too many or too severe alterations. But if the basis that we have set up is consistent, the alterations should be acceptable, because they do not result from the sum of more issues at each step. There are two types of color management: \textit{Display referred} (DRC) and \textit{Scene referred} (SRC).
+
+\begin{itemize}
+ \item \textbf{DRC} is based on having a calibrated monitor. What it displays is considered correct and becomes the basis of our color grading. The goal is that the colors of the final render will not change too much when displayed on other hardware or in other contexts. Be careful to make sure there is a color profile for each type of color space you choose for your monitor. If the work is to be viewed on the internet, be sure to set the monitor to \textit{sRGB} with its color profile; for HDTV we have to set the monitor to \textit{Rec.709} with its color profile; for 4K to \textit{Rec.2020}; for Cinema to \textit{DCI-P3}; etc.
+ \item \textbf{SRC} instead uses three steps:
+ \begin{enumerate}
+ \item The input color space: whatever it is, it can be converted manually or automatically to a color space of your choice.
+ \item The color space of the timeline: we can choose and set the color space on which to work.
+ \item The color space of the output: we can choose the color space of the output (on other monitors or of the final rendering).
+ \end{enumerate}
+ \textit{ACES} and \textit{OpenColorIO} have an SRC workflow. Please note that the monitor must still be calibrated to avoid unwanted color shifts.
+ \item There is also a third type of CMS: the one through \textbf{LUTs}. In practice, the SRC workflow is followed through appropriate 3D LUTs, instead of relying on the internal (automatic) management of the program: one LUT, matched to the camera, is used to display the footage correctly in the timeline, and another LUT is used for the final output. Using LUTs, however, always involves preparation, selection of the appropriate LUT, and post-correction. Also, since they are fixed conversion tables, they can always result in clipping and banding.
+\end{itemize}
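+
+Outside of \CGG{}, the LUT route can also be sketched on the command line with FFmpeg's \texttt{lut3d} filter (the \texttt{.cube} file name is assumed):
+
+\begin{lstlisting}[numbers=none]
+ $ ffmpeg -i input.mov -vf lut3d=camera_to_rec709.cube -c:a copy output.mov
+\end{lstlisting}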
+
+\subsection{Display}%
+\label{sub:display}
+
+Since \CGG{} does not have a CMS, it is essential to have a monitor calibrated and set to sRGB, which is exactly the output displayed on the program's timeline. You have these cases: