-
-\textit{Tracks:}\\
-(in New Project menu only) sets the number of video tracks the new project is assigned. Tracks can be added or deleted later, but options are provided here for convenience.
-
-\textit{Framerate:}\\
-sets the framerate of the video. The project framerate does not have to be the same as an individual media file frame rate that you load. Media is reframed to match the project framerate.
-
-\textit{Canvas size:}\\
-sets the size of the video output. In addition, each track also has its own frame size. Initially, the New Project dialog creates video tracks whose size match the video output. The video track sizes can be changed later without changing the video output.
-
-\textit{Aspect ratio:}\\
-sets the aspect ratio; this aspect ratio refers to the screen aspect ratio. The aspect ratio is applied to the video output. The aspect ratio can be different than the ratio that results from the formula: $\frac{h}{v}$ (the number of horizontal pixels divided into the number of vertical pixels). If the aspect ratio differs from the results of the formula above, your output will be in non-square pixels.
-
-\textit{Auto aspect ratio:}\\
-if this option is checked, the Set Format dialog always recalculates the Aspect ratio setting based upon the given Canvas size. This ensures pixels are always square.
-
-\textit{Color model:}\\
-the project will be stored in the color model video that is selected in the dropdown. Color model is important for video playback because video has the disadvantage of being slow compared to audio. Video is stored on disk in one colormodel, usually a YUV derivative. When played back, Cinelerra decompresses it from the file format directly into the format of the output device. If effects are processed, the program decompresses the video into an intermediate colormodel first and then converts it to the format of the output device. The selection of an intermediate colormodel determines how fast and accurate the effects are. A list of the current colormodel choices follows.\\
-\textbf{RGB-8 bit}
-Allocates 8 bits for the R, G, and B channels and no alpha. This is normally used for uncompressed media with low dynamic range.\\
-\textbf{RGBA-8 bit}
-Allocates an alpha channel to the 8 bit RGB colormodel. It can be used for overlaying multiple tracks.\\
-\textbf{RGB-Float}
-Allocates a 32 bit float for the R, G, and B channels and no alpha. This is used for high dynamic range processing with no transparency.\\
-\textbf{RGBA-Float}
-This adds a 32 bit float for alpha to RGB-Float. It is used for high dynamic range processing with transparency.\\
-\textbf{YUV-8 bit}
-Allocates 8 bits for Y, U, and V. This is used for low dynamic range operations in which the media is compressed in the YUV color space. Most compressed media is in YUV and this derivative allows video to be processed fast with the least color degradation.\\
-\textbf{YUVA-8 bit}
-Allocates an alpha channel to the 8 bit YUV colormodel for transparency.
-
-\noindent \vspace{1ex}In order to do effects which involve alpha channels, a colormodel with an alpha channel must be selected. These are RGBA-8 bit, YUVA-8 bit, and RGBA-Float. The 4 channel colormodels are slower than 3 channel colormodels, with the slowest being RGBA-Float. Some effects, like fade, work around the need for alpha channels while other effects, like chromakey, require an alpha channel in order to be functional. So in order to get faster results, it is always a good idea to try the effect without alpha channels to see if it works before settling on an alpha channel and slowing it down.\\
-When using compressed footage, YUV colormodels are usually faster than RGB colormodels. They also destroy fewer colors than RGB colormodels. If footage stored as JPEG or MPEG is processed many times in RGB, the colors will fade whereas they will not fade if processed in YUV. Years of working with high dynamic range footage has shown floating point RGB to be the best format for high dynamic range. 16 bit integers were used in the past and were too lossy and slow for the amount of improvement. RGB float does not destroy information when used with YUV source footage and also supports brightness above 100\%. Be aware that some effects, like Histogram, still clip above 100\% when in floating point.
-
-\textit{Interlace mode:}\\
-this is mostly obsolete in the modern digital age, but may be needed for older media such as that from broadcast TV. Interlacing uses two fields to create a frame. One field contains all odd-numbered lines in the image; the other contains all even-numbered lines. Interlaced fields are stored in alternating lines of interlaced source footage. The alternating lines missing on each output frame are interpolated.
\ No newline at end of file
+\index{video!attributes}
+
+\begin{description}
+\item[Tracks:] (in New Project menu only) sets the number of video
+tracks the new project is assigned. Tracks can be added or deleted
+later, but options are provided here for convenience.
+
+\item[Framerate:] \index{framerate} sets the framerate of the video. The project
+framerate does not have to match the frame rate of an individual
+media file that you load. Media is reframed to match the project
+framerate.
+
+\item[Canvas size:] \index{canvas size} sets the size of the video output \index{output size}. In addition,
+each track also has its own frame size. Initially, the New Project dialog creates video tracks whose sizes match the video output. The video track sizes can be changed later without changing the video output. We have: Project size = $W \times H$ pixels = canvas size = output size.
+
+\item[Aspect ratio:] \index{aspect ratio} sets the aspect ratio; this aspect ratio refers to the screen aspect ratio. The aspect ratio is applied to the video output (canvas). It can be convenient to vary the size of the canvas in percentage terms, instead of having to calculate the number of $W \times H$ pixels. The aspect ratio can be different from the ratio that results from the formula: $\dfrac{h}{v}$ (the number of horizontal pixels divided by the number of vertical pixels). If the aspect ratio differs from the result of the formula above, your output will be in non-square pixels.
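+As an illustration of the relation between canvas size and aspect
+ratio, the pixel aspect ratio can be derived by dividing the screen
+aspect ratio by the frame's $\dfrac{h}{v}$ ratio. The snippet below
+is a sketch of the arithmetic only, not \CGG{} code:

```python
from fractions import Fraction

def pixel_aspect_ratio(width, height, display_aspect):
    # Pixel aspect ratio = screen aspect ratio / (width / height).
    # A result of 1 means square pixels.
    return Fraction(display_aspect) / Fraction(width, height)

# 720x576 PAL DV shown at 4:3 needs non-square pixels:
print(pixel_aspect_ratio(720, 576, Fraction(4, 3)))     # 16/15
# 1920x1080 at 16:9 comes out exactly square:
print(pixel_aspect_ratio(1920, 1080, Fraction(16, 9)))  # 1
```

+Whenever the result differs from 1, the output is in non-square pixels.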
+
+\item[Auto aspect ratio:] if this option is checked, the Set Format
+dialog always recalculates the Aspect ratio setting based upon the
+given Canvas size. This ensures pixels are always square.
+
+\item[Color model:] \index{color!model} the internal color space of \CGG{} is X11 sRGB
+without color profile. \CGG{} always switches to sRGB when applying
+filters or using the compositing engine. Decoding/playback and
+encoding/output are a different case: the project will be stored in
+the video color model that is selected in the dropdown. Color model
+is important for video playback because video has the disadvantage
+of being slow compared to audio. Video is stored on disk in one
+colormodel, usually a YUV derivative. When played back, \CGG{}
+decompresses it from the file format directly into the format of the
+output device. If effects are processed, the program decompresses
+the video into an intermediate colormodel first and then converts it
+to the format of the output device. The selection of an
+intermediate colormodel determines how fast and accurate the effects
+are. A list of the current colormodel choices follows.
+
+ \begin{description}
+ \item[RGB-8 bit] Allocates 8\,bits for the R, G, and B channels
+and no alpha. This is normally used for uncompressed media with low
+dynamic range.
+ \item[RGBA-8 bit] Allocates an alpha channel to the 8\,bit RGB
+colormodel. It can be used for overlaying multiple tracks.
+ \item[RGB-Float] Allocates a 32\,bit float for the R, G, and B
+channels and no alpha. This is used for high dynamic range
+processing with no transparency.
+  \item[RGBA-Float] This adds a 32\,bit float for alpha to
+RGB-Float. It is used for high dynamic range processing with
+transparency, or when we do not want to lose data during the
+workflow, for example in color correction, key extraction, and
+motion tracking.
+ \item[YUV-8 bit] Allocates 8\,bits for Y, U, and V. This is used
+for low dynamic range operations in which the media is compressed in
+the YUV color space. Most compressed media is in YUV and this
+derivative allows video to be processed fast with the least color
+degradation.
+ \item[YUVA-8 bit] Allocates an alpha channel to the 8\,bit YUV
+colormodel for transparency.
+ \end{description}
+
+In order to do effects which involve alpha
+channels \index{alpha channel}, a colormodel with an alpha channel must be selected.
+These are RGBA-8 bit, YUVA-8 bit, and RGBA-Float. The 4 channel
+colormodels are slower than 3 channel colormodels, with the slowest
+being RGBA-Float. Some effects, like fade, work around the need for
+alpha channels while other effects, like chromakey, require an alpha
+channel in order to be functional. So in order to get faster
+results, it is always a good idea to try the effect without alpha
+channels to see if it works before settling on an alpha channel and
+slowing it down.
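+The way an alpha channel drives overlaying can be pictured with the
+standard \textit{over} operation. This is a generic sketch of alpha
+blending on a single sample, not \CGG{}'s compositing code:

```python
def over(fg, fg_alpha, bg):
    # Porter-Duff "over": blend a foreground sample onto a background
    # using the foreground alpha (0 = transparent, 1 = opaque).
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

# A 25% opaque white pixel over black yields 25% grey:
print(over(1.0, 0.25, 0.0))  # 0.25
```

+A 4 channel colormodel stores that per-pixel alpha; a 3 channel
+colormodel has nothing for the blend to read, which is why effects
+such as chromakey need RGBA or YUVA.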
+
+ When using compressed footage, YUV colormodels \index{yuv} are usually faster
+than RGB colormodels \index{RGB}. They also destroy fewer colors than RGB
+colormodels. If footage stored as JPEG or MPEG is processed many
+times in RGB, the colors will fade whereas they will not fade if
+processed in YUV\@. Years of working with high dynamic range footage
+has shown floating point RGB to be the best format for high dynamic
+range. 16 bit integers were used in the past and were too lossy and
+slow for the amount of improvement. RGB float does not destroy
+information when used with YUV source footage and also supports
+brightness above 100\,\%. Be aware that some effects, like
+Histogram, still clip above 100\,\% when in floating point. See also \ref{sec:color_space_range_playback}, \ref{sec:conform_the_project} and \ref{sec:overview_color_management}.
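+Why floating point RGB preserves YUV source data can be shown
+numerically. The matrices below are the standard BT.601 full-range
+(JFIF) conversion, used here purely as an illustration; they are not
+necessarily the exact coefficients \CGG{} uses internally:

```python
def yuv_to_rgb(y, u, v):
    # BT.601 full-range YUV -> RGB, kept in floating point (no clamping)
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return r, g, b

def rgb_to_yuv(r, g, b):
    # Inverse of the matrix above
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, u, v

def clamp8(x):
    # What an 8-bit RGB colormodel must do to out-of-range values
    return min(255, max(0, round(x)))

yuv = (50, 250, 50)              # legal YUV triple outside the RGB cube
rgb_float = yuv_to_rgb(*yuv)     # float RGB keeps the out-of-range values
rgb_8bit = tuple(clamp8(c) for c in rgb_float)

back_float = tuple(round(c) for c in rgb_to_yuv(*rgb_float))
back_8bit = tuple(round(c) for c in rgb_to_yuv(*rgb_8bit))
print(back_float)  # (50, 250, 50): the float round trip is lossless
print(back_8bit)   # the clamped 8-bit path cannot recover the original
```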
+
+\item[Interlace mode:] \index{interlacing} this is mostly obsolete in the modern digital
+age, but may be needed for older media such as that from broadcast
+TV\@. Interlacing uses two fields to create a frame. One field
+contains all odd-numbered lines in the image; the other contains all
+even-numbered lines. Interlaced fields are stored in alternating
+lines of interlaced source footage. The alternating lines missing on
+each output frame are interpolated.
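+A minimal sketch of that interpolation, rebuilding a full frame from
+the even field (rows are assumed to be plain lists of luma values;
+this is an illustration, not \CGG{}'s deinterlacer):

```python
def interpolate_field(even_lines, height):
    # Place the even field's lines, then fill each missing odd line
    # with the average of its vertical neighbours.
    out = [None] * height
    for i, line in zip(range(0, height, 2), even_lines):
        out[i] = line
    for i in range(1, height, 2):
        above = out[i - 1]
        below = out[i + 1] if i + 1 < height else above
        out[i] = [(a + b) // 2 for a, b in zip(above, below)]
    return out

# A tiny 4-line frame rebuilt from its two even lines:
print(interpolate_field([[0, 0, 0], [100, 100, 100]], 4))
# [[0, 0, 0], [50, 50, 50], [100, 100, 100], [100, 100, 100]]
```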
+\end{description}
+
+\section{Best practice in pre-editing}%
+\label{sec:best_practice_pre_editing}
+
+\CGG{} supports the simultaneous presence in the Timeline of sources with different frame sizes and frame rates. However, audio/video synchronization problems may occur due to their different timing.\protect\footnote{credit to sge and Andrew Randrianasulu}
+Plugins that rely on the timing of each frame, for example the \textit{Motion} and \textit{Interpolate} plugins, may have problems when used together with engines that increase the frame rate. By definition, the frame rate cannot be increased without either duplicating some frames or generating them in some intelligent way. But to work reliably, the \textit{Motion} plugin requires access to all actual frames. These kinds of plugins (and also the rare cases of audio/video desync) explicitly require the \textit{Play every frame} option.
+
+There is no problem as long as the source fps, project fps, and destination fps are identical. In most cases, high frame rates such as 120 or 144 fps will be just fine for \textit{Motion}, provided that the source footage all has the same frame rate.
+
+But when \textit{project} and \textit{source} frame rates are different (or \textit{project} and
+\textit{rendered} fps), then the \CGG{} engine has to either duplicate (interpolate) some frames or throw some away. Because of this, the audio tracks and the timeline get out of sync with such accelerated (or slowed down) video. To make the \textit{Motion} plugins reliably calculate interframe changes, you have to ensure consistent frame numbering and frame properties.
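+The duplication can be pictured by mapping each timeline frame back
+to the source frame shown at that instant. This is an illustrative
+model of the resampling, not the actual \CGG{} engine code:

```python
def source_frame_for(project_frame, project_fps, source_fps):
    # The source frame held at a given timeline frame when the
    # rates differ (integer math, so the mapping is exact).
    return project_frame * source_fps // project_fps

# A 24 fps clip on a 30 fps timeline: source frames 0 and 4 repeat.
print([source_frame_for(f, 30, 24) for f in range(10)])
# [0, 0, 1, 2, 3, 4, 4, 5, 6, 7]
```

+The repeated entries are the duplicated frames that throw off
+frame-exact plugins and audio synchronization.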
+
+Generally, best practice is to perform the following sequence of preparations for video editing.
+
+\begin{enumerate}
+  \item Motion stabilization, and maybe some other preparations, to improve the quality of the source video is best done under properties identical to the properties of the original video; it may be a different codec, but the same frame size and same frame rate. For stabilization you can use the ffmpeg command line plugins called \textit{vidstabdetect} and \textit{vidstabtransform}.
+  \item To have a workflow at the highest quality it may be convenient to convert the sources into image sequences (e.g. OpenEXR), especially if we want to exchange files with other Color or Compositing programs that preferably use image sequences.
+  \item Unify the color models. It is convenient to unify the color models of the sources because different color models would give inconsistent results once displayed in the Compositor window.
+ \item If we intend to do some color correction or compositing with VFX, it is convenient to do some de-noising on the sources to make their pixels more homogeneous and suitable for post processing. De-noising is a heavy operation for the system so it may be convenient to do it in pre-editing.
+  \item If you need to alter the frame rate, for example because different source clips have different frame rates, then recode all the necessary clips to the same future project frame rate. Here frame sizes can still differ, but frame rates should all be the same. If you need to change the frame rate of some restricted part, particularly when smooth acceleration/deceleration is needed, it can be done on the timeline. But if the frame rate has to be changed only due to different source fps, it is better to do it during the preparation stage.
+\end{enumerate}
+
+\CGG{} does not have color management \index{color management}, but we can still give some general advice on how to set color spaces:
+
+\begin{enumerate}
+ \item Profiling and setting the monitor: \\
+ source: \textit{sRGB} $\rightarrow$ monitor: \textit{sRGB} (we get a correct color reproduction) \\
+ source: \textit{sRGB} $\rightarrow$ monitor: \textit{rec709} (we get slightly dark colors) \\
+  source: \textit{sRGB} $\rightarrow$ monitor: \textit{DCI-P3} (we get over-saturated colors)
+
+ source: \textit{rec709} $\rightarrow$ monitor: \textit{rec709} (we get a correct color reproduction) \\
+ source: \textit{rec709} $\rightarrow$ monitor: \textit{sRGB} (we get slightly faded colors) \\
+ source: \textit{rec709} $\rightarrow$ monitor: \textit{DCI-P3} (we get over-saturated colors)
+  \item It would be better to set the project to RGB(A)-Float, if system performance allows, because it retains all available data and does not make rounding errors. If we cannot afford it, starting from YUV type media it is better to set the project to YUV(A)8, so as not to have a darker rendering in the timeline. On the contrary, if we start from RGB signals, it is better to use RGB(A)8. If the timeline does not display correctly, we will make adjustments from the wrong base (metamerism) and get false results.
+  \item Having correct color representation in the Compositor can be complicated. You can convert the input \textit{YUV color range} to a new YUV color range that provides more correct results (e.g. MPEG to JPEG). The \texttt{Colorspace} plugin can be used for this conversion.
+ \item Among the rendering options always set the values \\
+ \texttt{color\_trc=...} (gamma correction) \\
+ \texttt{color\_primaries=...} (gamut) \\
+  \texttt{colorspace=...} (color space conversion, greater color depth); \\
+  or \\
+  \texttt{colormatrix=...} (color space conversion, faster).
+
+  These are only metadata and do not affect the rendering, but when the file is later read by a player they are used to reproduce the colors without errors.
+\end{enumerate}
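+For the luma channel, the MPEG-to-JPEG range conversion mentioned
+above is a linear expansion from $16$--$235$ to $0$--$255$. The
+snippet below is a sketch of the arithmetic, not the plugin's code:

```python
def mpeg_to_jpeg_luma(y):
    # Expand limited-range luma (16-235) to full range (0-255),
    # clamping values outside the legal limited range.
    return min(255, max(0, round((y - 16) * 255 / 219)))

print(mpeg_to_jpeg_luma(16))   # 0    (limited black -> full black)
print(mpeg_to_jpeg_luma(235))  # 255  (limited white -> full white)
print(mpeg_to_jpeg_luma(126))  # 128  (mid grey stays mid grey)
```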
+
+For more tips on how \CGG{} processes colors on the timeline see \nameref{sec:color_space_range_playback}, \nameref{sec:conform_the_project} and \nameref{sec:overview_color_management}.
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "../CinelerraGG_Manual"
+%%% End: