Last updated: Oct-30-2024
The Transformation URL API enables you to deliver media assets and apply a large variety of on-the-fly transformations through the use of URL parameters. This reference provides comprehensive coverage of all available URL transformation parameters, including syntax, value details, and examples.
- Overview
- .<extension>
- a (angle)
- ac (audio codec)
- af (audio frequency)
- ar (aspect ratio)
- b (background)
- bl (baseline)
- bo (border)
- br (bitrate)
- c (crop/resize)
- co (color)
- cs (color space)
- d (default image)
- dl (delay)
- dn (density)
- dpr (DPR)
- du (duration)
- e (effect)
- eo (end offset)
- f (format)
- fl (flag)
- fn (custom function)
- fps (FPS)
- g (gravity)
- h (height)
- if (if condition)
- ki (keyframe interval)
- l (layer)
- o (opacity)
- p (prefix)
- pg (page or file layer)
- q (quality)
- r (round corners)
- so (start offset)
- sp (streaming profile)
- t (named transformation)
- u (underlay)
- vc (video codec)
- vs (video sampling)
- w (width)
- x, y (x & y coordinates)
- z (zoom)
- $ (variable)
Overview
The default Cloudinary asset delivery URL has the following structure:
https://res.cloudinary.com/<cloud_name>/<asset_type>/<delivery_type>/<transformations>/<version>/<public_id>.<extension>
This reference covers the parameters and corresponding options and values that can be used in the <transformations>
element of the URL. It also covers the <extension> element.
For information on other elements of the URL, see Transformation URL syntax.
The transformation names and syntax shown in this reference refer to the URL API.
Depending on the Cloudinary SDK you use, the names and syntax for the same transformation may be different. Therefore, all of the transformation examples in this reference also include the code for generating the example delivery URL from your chosen SDK.
The SDKs additionally provide a variety of helper methods to simplify the building of the transformation URL as well as other built-in capabilities. You can find more information about these in the relevant SDK guides.
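To make the relationship between the URL elements concrete, here is a minimal Python sketch of how a delivery URL is assembled. This is illustrative only, not SDK code; the cloud name "demo" and public ID "sample" are hypothetical placeholders.

```python
# Minimal sketch of assembling a Cloudinary-style delivery URL.
# "demo" and "sample" are hypothetical placeholder values.
BASE = "https://res.cloudinary.com"

def build_url(cloud_name, transformations, public_id, extension=None,
              asset_type="image", delivery_type="upload"):
    """Join the URL elements: base/cloud_name/asset_type/delivery_type/
    transformations/public_id[.extension]."""
    parts = [BASE, cloud_name, asset_type, delivery_type]
    if transformations:
        parts.append(transformations)
    parts.append(public_id)
    url = "/".join(parts)
    if extension:
        url += "." + extension
    return url

print(build_url("demo", "c_pad,h_300,w_300", "sample", "jpg"))
# → https://res.cloudinary.com/demo/image/upload/c_pad,h_300,w_300/sample.jpg
```

In practice the SDK helper methods described below generate these URLs for you; the sketch only shows how the elements slot together.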
Parameter types
There are two types of transformation parameters:
- Action parameters: Parameters that perform a specific transformation on the asset.
- Qualifier parameters: Parameters that do not perform an action on their own, but rather alter the default behavior or otherwise adjust the outcome of the corresponding action parameter.
See the Transformation Guide for additional guidelines and best practices regarding parameter types.
.<extension>
Although not a transformation parameter belonging to the <transformation>
element of the URL, the extension of the URL can transform the format of the delivered asset, in the same way as f_<supported format>.
If f_<supported format> or f_<auto> are not specified in the URL, the format is determined by the extension. If no format or extension is specified, then the asset is delivered in its originally uploaded format.
- If using an SDK to generate your URL, you can control the extension using the format parameter, or by adding the extension to the public ID.
- If using a raw transformation, for example to define an eager or named transformation, you can specify the extension at the end of the transformation parameters, following a forward slash. For example, c_pad,h_300,w_300/jpg means that the delivery URL has transformation parameters of c_pad,h_300,w_300 and a .jpg extension. c_pad,h_300,w_300/ represents the same transformation parameters, but with no extension.
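The raw-transformation convention described above can be sketched as a tiny parser. This is purely illustrative and not part of any Cloudinary SDK.

```python
def parse_raw_transformation(raw):
    """Split a raw transformation like 'c_pad,h_300,w_300/jpg' into
    (parameters, extension). A trailing '/<ext>' sets the delivery format;
    a trailing '/' (or no slash at all) means no extension is specified."""
    if "/" in raw:
        params, ext = raw.rsplit("/", 1)
        return params, ext or None
    return raw, None

print(parse_raw_transformation("c_pad,h_300,w_300/jpg"))  # params + jpg extension
print(parse_raw_transformation("c_pad,h_300,w_300/"))     # params, no extension
```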
Syntax details
Examples
a (angle)
Rotates or flips an asset by the specified number of degrees or automatically according to its orientation or available metadata. Multiple modes can be applied by concatenating their values with a dot.
Learn more: Rotating images | Rotating videos
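The dot-concatenation rule for multiple modes can be sketched as a small helper. This assumes hflip is one of the supported mode values (see the <mode> section below); the helper itself is illustrative, not SDK code.

```python
def angle(*values):
    """Build an a (angle) parameter. Multiple modes and/or degree values
    are concatenated with a dot, e.g. a_hflip.90."""
    return "a_" + ".".join(str(v) for v in values)

print(angle(90))           # rotate by 90 degrees
print(angle("hflip", 90))  # horizontal flip combined with a 90-degree rotation
```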
<degrees>
a_<degrees>
Rotates an asset by the specified angle.
See also: Arithmetic expressions
Syntax details
Examples
<mode>
a_<mode>
Rotates an image or video based on the specified mode.
Use with: To apply one of the a_auto modes, use it as a qualifier with a cropping action that adjusts the aspect ratio, as per the syntax details and example below.
Syntax details
Examples
ac (audio codec)
af (audio frequency)
af_<frequency value>
Controls the audio sampling frequency.
As a qualifier, can be used to preserve the original frequency, overriding the default frequency behavior of vc_auto.
As a qualifier, use with: vc_auto
Learn more: Audio frequency control
Syntax details
Example
ar (aspect ratio)
ar_<ratio value>
A qualifier that crops or resizes the asset to a new aspect ratio, for use with a crop/resize mode that determines how the asset is adjusted to the new dimensions.
Use with: c (crop/resize)
Learn more: Setting the resize dimensions
See also: h (height) | w (width) | Arithmetic expressions
Syntax details
Examples
b (background)
Applies a background to empty or transparent areas.
<color value>
b_<color value>
Applies the specified background color on transparent background areas in an image.
Can also be used as a qualifier to override the default background color for padded cropping of images and videos, text overlays and generated waveform images.
As a qualifier, use with: c_auto_pad - image only | c_fill_pad - image only | c_lpad | c_mpad - image only | c_pad | l_subtitles | l_text | fl_waveform
Learn more: Background color for images | Background color for videos
Syntax details
Examples
auto
b_auto[:<mode>][:<number>][:<direction>][:palette_<color 1>[_..._<color n>]]
A qualifier that automatically selects the background color based on one or more predominant colors in the image, for use with one of the padding crop mode transformations.
Learn more: Content-aware padding
Use with: c_auto_pad - image only | c_pad | c_lpad | c_mpad | c_fill_pad - image only
Syntax details
Examples
blurred
b_blurred[:<intensity>][:<brightness>]
A qualifier that generates a blurred version of the same video to use as the background with the corresponding padded cropping transformation.
Learn more: Pad with blurred video background
Syntax details
Example
gen_fill
b_gen_fill[:prompt_<prompt>][;seed_<seed>]
A qualifier that automatically fills the padded area using generative AI to extend the image seamlessly. Optionally include a prompt to guide the image generation.
Using different seeds, you can regenerate the image if you're not happy with the result. You can also use seeds to return a previously generated result, as long as any other preceding transformation parameters are the same.
- Generative fill can only be used on non-transparent images.
- There is a special transformation count for generative fill.
- Generative fill is not supported for animated images, fetched images or incoming transformations.
- If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Learn more: Generative fill
Use with: c_auto_pad | c_pad | c_lpad | c_mpad | c_fill_pad
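The b_gen_fill syntax above can be sketched as a small helper that assembles the qualifier string. The prompt value "sunset" is a hypothetical example; the helper is illustrative only.

```python
def gen_fill(prompt=None, seed=None):
    """Assemble a b_gen_fill qualifier following the syntax
    b_gen_fill[:prompt_<prompt>][;seed_<seed>]. Both options are optional;
    when both are given they are joined with ';'."""
    opts = []
    if prompt:
        opts.append("prompt_" + prompt)
    if seed is not None:
        opts.append("seed_" + str(seed))
    return "b_gen_fill" + (":" + ";".join(opts) if opts else "")

print(gen_fill())                          # plain generative fill
print(gen_fill(prompt="sunset", seed=42))  # guided by a prompt, repeatable seed
# Used as a qualifier with a padding crop, e.g.:
# c_pad,w_1500,h_1000,b_gen_fill:prompt_sunset
```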
Syntax details
Examples
bl (baseline)
bl_<named transformation>
Establishes a baseline transformation from a named transformation. The baseline transformation is cached, so when re-used with other transformation parameters, the baseline part of the transformation does not have to be regenerated, saving processing time and cost.
- You can combine the baseline transformation with other transformation parameters, but it must be the first component in the chain and the only transformation parameter in that component.
- You must specify a supported format transformation (f_) in the named transformation.
- Consider using f_jxl/q_100 in the baseline transformation to prevent images suffering from loss due to double lossy encoding.
- You cannot use automatic format (f_auto) in the named transformation, although this can be used in a subsequent component.
- If the named transformation contains variables, the variables must be defined within the named transformation.
- The baseline transformation is not supported for fetched media or incoming transformations.
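The chaining rules above can be sketched as follows. The named transformation my_baseline is hypothetical; per the notes, it must itself include a supported format (e.g. f_jxl/q_100), while f_auto is allowed only in a subsequent component.

```python
# Baseline must be the first component and the only parameter in that
# component; later components can add further transformations.
components = ["bl_my_baseline", "c_fill,w_400,h_400", "f_auto"]
transformation = "/".join(components)
print(transformation)  # bl_my_baseline/c_fill,w_400,h_400/f_auto
```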
Syntax details
Examples
bo (border)
bo_<width>_<style>_<color>
Adds a solid border around an image or video.
As a qualifier, adds a border to an overlay.
Use with: l_<image id> | l_fetch | l_subtitles | l_text | l_video
Learn more: Adding borders
Syntax details
Examples
br (bitrate)
br_<bitrate value>[:constant]
Controls the bitrate for audio or video files in bits per second. By default, a variable bitrate (VBR) is used, with this value indicating the maximum bitrate.
Supported for video codecs: h264, h265 (MPEG-4); vp8, vp9 (WebM)
Supported for audio codecs: aac, mp3, vorbis
Learn more: Bitrate control
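A sketch of assembling the br parameter follows. The k/m shorthand in the values (e.g. 500k for 500,000 bits per second) reflects common Cloudinary bitrate examples but should be confirmed against the Syntax details; the helper itself is illustrative only.

```python
def bitrate(value, constant=False):
    """Build a br parameter. By default a variable bitrate (VBR) is used,
    with the value as the maximum; append ':constant' to request a
    constant bitrate (CBR) instead."""
    return "br_" + value + (":constant" if constant else "")

print(bitrate("500k"))               # VBR capped at ~500 kbps
print(bitrate("2m", constant=True))  # constant 2 Mbps
```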
Syntax details
Examples
c (crop/resize)
Changes the size of the delivered asset according to the requested width & height dimensions.
Depending on the selected <crop mode>, parts of the original asset may be cropped out and/or the asset may be resized (scaled up or down).
When using any of the modes that can potentially crop parts of the asset, the selected gravity parameter controls which part of the original asset is kept in the resulting delivered file.
Learn more: Resizing and cropping images | Resizing and cropping videos
auto
c_auto
Automatically determines the best crop based on the gravity and specified dimensions.
If the requested dimensions are smaller than the best crop, the result is downscaled. If the requested dimensions are larger than the original image, the result is upscaled. Use this mode in conjunction with the g (gravity) parameter.
Required qualifiers
And
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Learn more: Automatic gravity with the automatic cropping mode
Example
auto_pad
c_auto_pad
Tries to prevent a "bad crop" by first attempting to use the auto cropping mode, but adding some padding if the algorithm determines that more of the original image needs to be included in the final image. Especially useful if the aspect ratio of the delivered asset is considerably different from the original's aspect ratio. Supported only in conjunction with g_auto.
Required qualifiers
And
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
Example
crop
c_crop
Extracts the specified size from the original image without distorting or scaling the delivered asset.
By default, the center of the image is kept (extracted) and the top/bottom and/or side edges are evenly cropped to achieve the requested dimensions. You can specify the gravity qualifier to control which part of the image to keep, either as a compass direction (such as south or north_east), one of the special gravity positions (such as faces or ocr_text), AI-based automatic region detection or AI-based object detection.
You can also specify a specific region of the original image to keep by specifying x and y qualifiers together with w (width) and h (height) qualifiers to define an exact bounding box. When using this method, and no gravity is specified, the x and y coordinates are relative to the top-left (north-west) corner of the original asset. You can also use percentage-based numbers instead of the exact coordinates for x, y, w and h (e.g., 0.5 for 50%). Use this method only when you already have the required absolute cropping coordinates, for example, if your application lets users upload content and manually select a region to crop, and you pass those coordinates to build the crop URL.
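The bounding-box method above can be sketched as a small helper. This is illustrative only; the example coordinate values are arbitrary.

```python
def crop_region(w, h, x, y):
    """Build an exact bounding-box crop component. Per the reference,
    values of 1.0 or less act as percentages of the original dimensions
    (e.g. 0.5 for 50%); larger values are absolute pixels."""
    return f"c_crop,w_{w},h_{h},x_{x},y_{y}"

print(crop_region(300, 200, 50, 80))      # absolute pixel coordinates
print(crop_region(0.5, 0.5, 0.25, 0.25))  # percentage-based coordinates
```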
Required qualifiers
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
g (gravity) | x (x-coordinate) | y (y-coordinate)
Example
fill
c_fill
Creates an asset with the exact specified width and height without distorting the asset. This option first scales as much as needed to at least fill both of the specified dimensions. If the requested aspect ratio is different than the original, cropping will occur on the dimension that exceeds the requested size after scaling. You can specify which part of the original asset you want to keep if cropping occurs using the gravity (set to 'center' by default).
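The scale-then-crop behavior described above can be sketched numerically. This is a simplified model of the fill logic (uniform scale until both dimensions are covered, then crop the overflow), not Cloudinary's actual implementation.

```python
def fill_dimensions(orig_w, orig_h, target_w, target_h):
    """Model c_fill: scale uniformly so BOTH target dimensions are at
    least filled, then crop the dimension that overflows.
    Returns (scaled_w, scaled_h, final_w, final_h)."""
    scale = max(target_w / orig_w, target_h / orig_h)
    scaled_w, scaled_h = round(orig_w * scale), round(orig_h * scale)
    return scaled_w, scaled_h, target_w, target_h

# A 1000x500 original filled into a 400x400 square: scale by 0.8
# (400/500) to 800x400, then crop the width down to 400.
print(fill_dimensions(1000, 500, 400, 400))  # (800, 400, 400, 400)
```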
Required qualifiers
At least one of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
Examples
fill_pad
c_fill_pad
Tries to prevent a "bad crop" by first attempting to use the fill mode, but adding some padding if the algorithm determines that more of the original image needs to be included in the final image, or if more content in specific frames in a video should be shown. Especially useful if the aspect ratio of the delivered asset is considerably different from the original's aspect ratio. Supported only in conjunction with g_auto.
Required qualifiers
And
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
Example
fit
c_fit
Scales the asset up or down so that it takes up as much space as possible within a bounding box defined by the specified dimension parameters without cropping any of it. The original aspect ratio is retained and all of the original image is visible.
Required qualifiers
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Example
imagga_crop
c_imagga_crop
Requires the Imagga Crop and Scale add-on.
The Imagga Crop and Scale add-on can be used to smartly crop your images based on areas of interest within each specific photo as automatically calculated by the Imagga algorithm.
Required qualifiers
At least one of the following: w (width) | h (height)
Optional qualifiers
Example
imagga_scale
c_imagga_scale
Requires the Imagga Crop and Scale add-on.
The Imagga Crop and Scale add-on can be used to smartly scale your images based on automatically calculated areas of interest within each specific photo.
Required qualifiers
At least one of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Example
lfill
c_lfill
The lfill (limit fill) mode is the same as fill but only if the original image is larger than the specified resolution limits, in which case the image is scaled down to fill the specified width and height without distorting the image, and then the dimension that exceeds the request is cropped. If the original dimensions are smaller than the requested size, it is not resized at all. This prevents upscaling. You can specify which part of the original image you want to keep if cropping occurs using the gravity parameter (set to center by default).
Required qualifiers
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
Example
limit
c_limit
Same as the fit mode but only if the original asset is larger than the specified limit (width and height), in which case the asset is scaled down so that it takes up as much space as possible within a bounding box defined by the specified width and height parameters. The original aspect ratio is retained (by default) and all of the original asset is visible. This mode doesn't scale up the asset if your requested dimensions are larger than the original image size.
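The no-upscaling behavior of limit can be modeled in a few lines. This is a simplified numeric sketch, not Cloudinary's implementation.

```python
def limit_dimensions(orig_w, orig_h, max_w, max_h):
    """Model c_limit: identical to fit, but never upscale. The scale
    factor is capped at 1.0, so an asset that already fits inside
    max_w x max_h is returned unchanged."""
    scale = min(max_w / orig_w, max_h / orig_h, 1.0)
    return round(orig_w * scale), round(orig_h * scale)

print(limit_dimensions(2000, 1000, 500, 500))  # scaled down to fit
print(limit_dimensions(300, 200, 500, 500))    # already fits: unchanged
```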
Required qualifiers
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Example
lpad
c_lpad
The lpad (limit pad) mode is the same as pad but only if the original asset is larger than the specified limit (width and height), in which case the asset is scaled down to fill the specified width and height while retaining the original aspect ratio (by default) and with all of the original asset visible. This mode doesn't scale up the asset if your requested dimensions are bigger than the original asset size. Instead, if the proportions of the original asset do not match the requested width and height, padding is added to the asset to reach the required size. You can also specify where the original asset is placed by using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.
Required qualifiers
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
g_<gravity position> | b (background)
Example
mfit
c_mfit
The mfit (minimum fit) mode is the same as fit but only if the original image is smaller than the specified minimum (width and height), in which case the image is scaled up so that it takes up as much space as possible within a bounding box defined by the specified width and height parameters. The original aspect ratio is retained (by default) and all of the original image is visible. This mode doesn't scale down the image if your requested dimensions are smaller than the original image's.
Required qualifiers
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Example
mpad
c_mpad
The mpad (minimum pad) mode is the same as pad but only if the original image is smaller than the specified minimum (width and height), in which case the image is unchanged but padding is added to fill the specified dimensions. This mode doesn't scale down the image if your requested dimensions are smaller than the original image's. You can also specify where the original image is placed by using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.
Required qualifiers
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
g_<gravity position> | b (background)
Examples
pad
c_pad
Resizes the asset to fill the specified width and height while retaining the original aspect ratio (by default) and with all of the original asset visible. If the proportions of the original asset do not match the specified width and height, padding is added to the asset to reach the required size. You can also specify where the original asset is placed using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.
Required qualifiers
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
g_<gravity position> | b (background)
Example
scale
c_scale
Resizes the asset exactly to the specified width and height. All original asset parts are visible, but might be stretched or shrunk if the dimensions you request have a different aspect ratio than the original.
If only width or only height is specified, then the asset is scaled to the new dimension while retaining the original aspect ratio (unless you also include the fl_ignore_aspect_ratio flag).
Required qualifiers
At least one of the following: w (width) | h (height) | ar (aspect ratio)
Optional qualifiers
fl_ignore_aspect_ratio | g_liquid
See also: Liquid rescaling
Examples
thumb
c_thumb
Creates image thumbnails based on a gravity position. Must always be accompanied by the g (gravity) parameter. This cropping mode generates a thumbnail of an image with the exact specified width and height dimensions and with the original proportions retained, but the resulting image might be scaled to fit in the specified dimensions. You can specify the z (zoom) parameter to determine how much to scale the resulting image within the specified width and height.
Required qualifiers
And
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers
Example
co (color)
co_<color value>
A qualifier that specifies the color to use with the corresponding transformation.
Use with: e_colorize | e_outline | e_make_transparent | e_shadow | l_text | l_subtitles | fl_waveform
Syntax details
Examples
cs (color space)
cs_<color space mode>
Controls the color space used for the delivered image or video.
If you don't include this parameter in your transformation, the color space of the original asset is generally retained. In some cases for videos, the color space is normalized for web delivery, unless cs_copy is specified.
Syntax details
Examples
d (default image)
d_<image asset>
Specifies a backup placeholder image to be delivered in the case that the actual requested delivery image or social media picture does not exist. Any requested transformations are applied on the placeholder image as well.
Learn more: Using a default image placeholder
Syntax details
Example
dl (delay)
dl_<time value>
Controls the time delay between the frames of a delivered animated image. (The source asset can be an image or a video.)
Related flag: fl_animated
Syntax details
Example
dn (density)
dn_<dots per inch>
Controls the density to use when delivering an image or when converting a vector file such as a PDF or EPS document to a web image delivery format.
- For web image formats: By default, if an image does not contain resolution information in its embedded metadata, Cloudinary normalizes any derived images for web optimization purposes and delivers them at 150 dpi. Controlling the dpi can be useful when generating a derived image intended for printing.
  Tip: You can take advantage of the idn (initial density) value to automatically set the density of your image to the (pre-normalized) initial density of the original image (for example, dn_idn). This value is taken from the original image's metadata.
- For vector files (PDF, EPS, etc.): When you deliver a vector file in a web image format, it is delivered by default at 150 dpi.
See also: Arithmetic expressions
Learn more: Deliver a PDF page as an image
Syntax details
Example
dpr (DPR)
Sets the device pixel ratio (DPR) for the delivered image or video using a specified value or automatically based on the requesting device.
<pixel ratio>
dpr_<pixel ratio>
Delivers the image or video in the specified device pixel ratio.
Note: When using a DPR value greater than 1, ensure that you also set the desired final display dimensions in your image or video tag. For example, if you set c_scale,h_300/dpr_2.0 in your delivery URL, you should also set height=300 in your image tag. Otherwise, the image will be delivered at 2.0 x the requested dimensions (a height of 600px in this example).
Learn more: Set Device Pixel Ratio (DPR)
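The arithmetic behind the DPR note can be shown in a one-line sketch: the delivered pixel dimensions are the requested CSS dimensions multiplied by the DPR, while the tag keeps the CSS size.

```python
def delivered_height(css_height, dpr):
    """With dpr_<ratio>, the asset is delivered at css_height * dpr
    pixels; the tag's height attribute stays at css_height so the
    browser displays it at the intended size on a high-density screen."""
    return round(css_height * dpr)

# c_scale,h_300/dpr_2.0 with height=300 in the tag delivers a 600px asset:
print(delivered_height(300, 2.0))  # 600
```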
See also: Arithmetic expressions
Syntax details
Example
auto
dpr_auto
Delivers the image in a resolution that automatically matches the DPR (Device Pixel Ratio) setting of the requesting device, rounded up to the nearest integer. Only works for certain browsers and when Client-Hints are enabled.
Learn more: Automatic DPR
Example
du (duration)
du_<time value>
Sets the duration (in seconds) of a video or audio clip.
- Can be used independently to trim a video or audio clip to the specified length. This parameter is often used in conjunction with the so (start offset) and/or eo (end offset) parameters.
- Can be used as a qualifier to control the length of time for a corresponding transformation.
As a qualifier, use with: e_boomerang | l_audio | l_<image id> | l_video
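A sketch of combining du with the start/end offset parameters follows. This is an illustrative helper, not SDK code; the offset values are arbitrary examples.

```python
def trim(start=None, duration=None, end=None):
    """Build video trimming parameters: so (start offset), du (duration)
    and eo (end offset), comma-joined into a single component."""
    parts = []
    if start is not None:
        parts.append(f"so_{start}")
    if duration is not None:
        parts.append(f"du_{duration}")
    if end is not None:
        parts.append(f"eo_{end}")
    return ",".join(parts)

print(trim(start=2.5, duration=5))  # 5-second clip starting at 2.5s
print(trim(start=1, end=8))         # clip from 1s to 8s
```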
Syntax details
Examples
e (effect)
Applies the specified effect to an asset.
If you specify more than one effect in a transformation component (separated by commas), only the last effect in that component is applied.
To combine effects, use separate components (separated by forward slashes) following best practice guidelines, which recommend including only one action parameter per component.
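The comma-vs-slash distinction above can be shown concretely. Both effect names (e_cartoonify, e_blur) appear in this reference; the blur strength value 300 is an arbitrary example.

```python
# Within one component (commas), only the LAST effect is applied:
single_component = "e_cartoonify,e_blur:300"   # only the blur applies

# Across separate components (slashes), both effects apply in order:
chained = "/".join(["e_cartoonify", "e_blur:300"])
print(chained)  # e_cartoonify/e_blur:300
```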
accelerate
e_accelerate[:<acceleration percentage>]
Speeds up the video playback by the specified percentage.
Syntax details
Example
adv_redeye
e_adv_redeye
Requires the Advanced Facial Attribute Detection add-on.
Automatically removes red eyes from an image.
Example
anti_removal
e_anti_removal[:<distortion level>]
A qualifier that slightly distorts the corresponding image overlay to prevent easy removal.
Use with: l_<image id> | l_fetch | l_text | u (underlay)
Learn more: Smart anti-removal
Syntax details
Example
art
e_art:<filter>
Applies the selected artistic filter.
Learn more: Artistic filter effects
Syntax details
Example
auto_brightness
e_auto_brightness[:<blend percentage>]
Automatically adjusts the image brightness and blends the result with the original image.
Syntax details
Example
auto_color
e_auto_color[:<blend percentage>]
Automatically adjusts the image color balance and blends the result with the original image.
Syntax details
Example
auto_contrast
e_auto_contrast[:<blend percentage>]
Automatically adjusts the image contrast and blends the result with the original image.
Syntax details
Example
assist_colorblind
e_assist_colorblind[:<assist type>]
Applies stripes or color adjustment to help people with common color blind conditions to differentiate between colors that are similar for them.
Learn more: Blog post
Syntax details
Examples
background_removal
e_background_removal[:fineedges_<enable fine edges>]
Uses the Cloudinary AI Background Removal add-on to make the background of an image transparent.
Notes:
- You can combine the background_removal effect with other transformation parameters, but the background_removal effect must be the first component in the chain.
- With no other action parameters in the same component (as per our best practice guidelines), the background-removed version is saved so that when used for other derived versions of the background-removed asset, the add-on is not called again for that asset.
- The first time the add-on is called for an asset, a 423 error response is returned until the processing has completed.
- The add-on imposes a limit of 4,194,304 (2048 x 2048) total pixels on its input images. If an image exceeds this limit, the add-on first scales down the image to fit the limit, and then processes it. The scaling does not affect the aspect ratio of the image, but it does alter its output dimensions.
- Background removal on the fly cannot currently be used for image overlays. Instead, apply the base image as an underlay.
- Background removal on the fly is not supported for fetched images.
- Background removal on the fly is not supported for incoming transformations. If you need this use case, you can remove the background on upload.
Tips:
- This transformation generally gives better results than the e_bgremoval and e_make_transparent effects.
- It works well for foreground objects with fine edges and lets you specify certain items that you expect to see as foreground objects.
- You can also try the Pixelz Remove the Background add-on for professional manual background removal.
Learn more: On-the-fly background removal
Syntax details
Examples
bgremoval
e_bgremoval[:screen][:<color to remove>]
Makes the background of an image transparent (or solid white for JPGs). Use when the background is a uniform color.
- If the background is not uniform, you can also try the e_make_transparent effect.
- If neither e_bgremoval nor e_make_transparent gives the desired result, it's recommended to try the e_background_removal effect, which uses the Cloudinary AI Background Removal add-on.
- You can also try the Pixelz Remove the Background add-on for professional manual background removal.
Syntax details
Examples
blackwhite
e_blackwhite[:<threshold>]
Converts an image to black and white.
Syntax details
Examples
blue
blur
blur_faces
blur_region
e_blur_region[:<strength>]
Applies a blurring filter to the region of an image specified by x, y, width and height, or an area of text. If no region is specified, the whole image is blurred.
Optional qualifiers
x, y (x & y coordinates) | w (width) | h (height) | g_ocr_text
Syntax details
Examples
boomerang
e_boomerang
Causes a video clip to play forwards and then backwards.
Use in conjunction with trimming parameters (duration, start_offset, and/or end_offset) and the loop effect to deliver a classic (short, repeating) boomerang clip.
Learn more: Create a boomerang video clip
Example
brightness
brightness_hsb
e_brightness_hsb[:<level>]
Adjusts image brightness modulation in HSB to prevent artifacts in some images.
Syntax details
Example
camera
e_camera[[:up_<vertical position>][;right_<horizontal position>][;zoom_<zoom amount>][;env_<environment>][;exposure_<exposure amount>][;frames_<number of frames>]]
A qualifier that lets you customize a 2D image captured from a 3D model, as if a photo is being taken by a camera.
The camera always points towards the center of the 3D model and can be rotated around it. Specify the position of the camera, the exposure, zoom and lighting to capture your perfect shot.
Use with fl_animated to create a 360 spinning animation.
Use with: f (format)
Learn more: Generating an image from a 3D model
See also: e_light
Syntax details
Examples
cartoonify
e_cartoonify[:<line strength>][:<color reduction>]
Applies a cartoon effect to an image.
Syntax details
Examples
colorize
e_colorize[:<level>]
Colorizes an image. By default, gray is used for colorization. You can specify a different color using the color qualifier.
Optional qualifier
Syntax details
Examples
contrast
e_contrast[:level_<level>][;type_<function type>]
Adjusts an image or video contrast.
Syntax details
Examples
cut_out
e_cut_out
Trims pixels according to the transparency levels of a specified overlay image. Wherever an overlay image is transparent, the original is shown, and wherever an overlay is opaque, the resulting image is transparent.
Required qualifiers
Learn more: Shape cutouts: remove a shape
Example
deshake
e_deshake[:<pixels>]
Removes small motion shifts from a video. Useful for non-professional (user-generated content) videos.
Syntax details
Example
displace
e_displace
Displaces the pixels in an image according to the color channels of the pixels in another specified image (a gradient map specified with the overlay parameter).
Required qualifiers
At least one of the following: x, y (x & y coordinates)
The values of x and y must be between -999 and 999.
Learn more: Displacement maps
Example
distort
Distorts an image to a new shape by either adjusting its corners or by warping it into an arc.
e_distort:<x1>:<y1>:<x2>:<y2>:<x3>:<y3>:<x4>:<y4>
Distorts an image, or text overlay, to a new shape by adjusting its corners to achieve perspective warping.
Learn more: Image shape changes and distortion effects
Syntax details
Example
e_distort:arc:<degrees>
Distorts an image, or text overlay, to an arc shape.
Learn more: Image shape changes and distortion effects
Syntax details
Example
dropshadow
e_dropshadow[:azimuth_<azimuth>][;elevation_<elevation>][;spread_<spread>]
Adds a shadow to the object(s) in an image. Specify the angle and spread of the light source causing the shadow.
- Either:
  - the original image must include transparency, for example where the background has already been removed and it has been stored in a format that supports transparency, such as PNG, or
  - the dropshadow effect must be chained after the background_removal effect (for example, e_background_removal/e_dropshadow).
- The dropshadow effect is not supported for animated images, fetched images or incoming transformations.
Learn more: Dropshadow effect
See also: e_shadow
Syntax details
Example
enhance
e_enhance
Uses AI to analyze an image and make adjustments to enhance the appeal of the image, such as:
- Exposure reduction: Correcting overexposed images, smartly reducing excessive brightness and reclaiming details in bright areas, bringing back a balanced exposure.
- Exposure enhancement: Adjusting underexposed images by enhancing dim areas, thus improving overall exposure without compromising the image's natural quality.
- Color intensification: Enriching color vividness, making hues more vibrant and lively, thus bringing a more dynamic color range to the image.
- Color temperature correction: Adjusting the white balance, correcting color casts and ensuring that the colors in the image accurately reflect their real-world appearance.
Consider also using generative restore to revitalize poor quality images, or the improve effect to automatically adjust color, contrast and brightness. See this comparison of image enhancement options.
- During processing, large images are downscaled to a maximum of 4096 x 4096 pixels, then upscaled back to their original size, which may affect quality.
- There is a special transformation count for the enhance effect.
- The enhance effect is not supported for fetched images or incoming transformations.
See also: e_improve | e_gen_restore
Example
extract
e_extract:prompt_(<prompt 1>[;...;<prompt n>])[;multiple_<detect multiple>][;mode_<mode>][;invert_<invert>]
Extracts an area or multiple areas of an image, described in natural language. You can choose to keep the content of the extracted area(s) and make the rest of the image transparent (like background removal), or make the extracted area(s) transparent, keeping the content of the rest of the image. Alternatively, you can make a grayscale mask of the extracted area(s) or everything excluding the extracted area(s), which you can use with other transformations such as e_mask, e_multiply, e_overlay and e_screen.
- During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
- When you specify more than one prompt, all the objects specified in each of the prompts will be extracted whether or not multiple_true is specified in the URL.
- There is a special transformation count for the extract effect.
- The extract effect is not supported for animated images, fetched images or incoming transformations.
- User-defined variables cannot be used for the prompt when more than one prompt is specified.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
See also: e_background_removal
Learn more: Shape cutouts: use AI to determine what to remove or keep in an image
Syntax details
Examples
fade
e_fade[:<duration>]
Fades into, or out of, an animated GIF or video. You can chain fade effects to both fade into and out of the media.
Learn more: Fade in and out
Syntax details
Example
fill_light
e_fill_light[:<blend>][:<bias>]
Adjusts the fill light and optionally blends the result with the original image.
Syntax details
Example
gamma
gen_background_replace
e_gen_background_replace[:prompt_<prompt>][;seed_<seed>]
Replaces the background of an image with an AI-generated background. If no prompt is specified, the background is based on the contents of the image. Otherwise, the background is based on the natural language prompt specified.
For images with transparency, the generated background replaces the transparent area. For images without transparency, the effect first determines the foreground elements and leaves those areas intact, while replacing the background.
Using different seeds, you can regenerate a background if you're not happy with the result. You can also use seeds to return a previously generated result, as long as any other preceding transformation parameters are the same.
- The use of generative AI means that results may not be 100% accurate.
- There is a special transformation count for the generative background replace effect.
- If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
- The generative background replace effect is not supported for animated images, fetched images or incoming transformations.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Learn more: Generative background replace
Syntax details
Examples
gen_recolor
e_gen_recolor:prompt_(<prompt 1>[;...;<prompt n>]);to-color_<color>[;multiple_<detect multiple>]
Uses generative AI to recolor parts of your image, maintaining the relative shading. Specify one or more prompts and the color to change them to. Use the multiple parameter to replace the color of all instances of the prompt when one prompt is given.
- The generative recolor effect can only be used on non-transparent images.
- The use of generative AI means that results may not be 100% accurate.
- The generative recolor effect works best on simple objects that are clearly visible.
- Very small objects and very large objects may not be detected.
- During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
- When you specify more than one prompt, all the objects specified in each of the prompts will be recolored whether or not multiple_true is specified in the URL.
- There is a special transformation count for the generative recolor effect.
- The generative recolor effect is not supported for animated images, fetched images or incoming transformations.
- User-defined variables cannot be used for the prompt when more than one prompt is specified.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Learn more: Generative recolor
See also: e_replace_color
Syntax details
Examples
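As a sketch of the documented syntax (not SDK code), single-prompt and multi-prompt gen_recolor URLs might look like this; the cloud name, public ID, prompts, and hex color are placeholder assumptions:

```python
# Sketch only: gen_recolor URLs with one prompt and with multiple prompts.
# Cloud name, public ID, prompts and the hex color are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"

# Single prompt, recoloring every detected instance via multiple_true
single = f"{base}/e_gen_recolor:prompt_sweater;to-color_red;multiple_true/sample.jpg"

# Multiple prompts, grouped in parentheses and separated by semicolons
multi = f"{base}/e_gen_recolor:prompt_(sweater;shoes);to-color_2F4F4F/sample.jpg"

print(single)
print(multi)
```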
gen_remove
e_gen_remove[:prompt_(<prompt 1>[;...;<prompt n>])][;multiple_<detect multiple>][;remove-shadow_<remove shadow>][:region_((x_<x coordinate 1>;y_<y coordinate 1>;w_<width 1>;h_<height 1>)[;...;(x_<x coordinate n>;y_<y coordinate n>;w_<width n>;h_<height n>)])]
Uses generative AI to remove unwanted parts of your image, replacing the area with realistic pixels. Specify either one or more prompts or one or more regions. Use the multiple parameter to remove all instances of the prompt when one prompt is given.
By default, shadows cast by removed objects are not removed. If you want to remove the shadow, when specifying a prompt you can set the remove-shadow parameter to true.
- The generative remove effect can only be used on non-transparent images.
- The use of generative AI means that results may not be 100% accurate.
- The generative remove effect works best on simple objects that are clearly visible.
- Very small objects and very large objects may not be detected.
- Do not attempt to remove faces or hands.
- During processing, large images are downscaled to a maximum of 6140 x 6140 pixels, then upscaled back to their original size, which may affect quality.
- When you specify more than one prompt, all the objects specified in each of the prompts will be removed whether or not multiple_true is specified in the URL.
- There is a special transformation count for the generative remove effect.
- The generative remove effect is not supported for animated images, fetched images or incoming transformations.
- User-defined variables cannot be used for the prompt when more than one prompt is specified.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Learn more: Generative remove
Syntax details
Examples
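As a sketch of the syntax above (plain strings, not SDK code), removal by prompt and by explicit region might be written like this; all names and coordinates are placeholders:

```python
# Sketch only: gen_remove by prompt and by region. Values are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"

# Remove every detected instance of the prompt, including its shadow
by_prompt = f"{base}/e_gen_remove:prompt_dog;multiple_true;remove-shadow_true/sample.jpg"

# Remove whatever falls inside an explicit region (x, y, width, height)
by_region = f"{base}/e_gen_remove:region_((x_100;y_100;w_200;h_150))/sample.jpg"

print(by_prompt)
print(by_region)
```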
gen_replace
e_gen_replace:from_<from prompt>;to_<to prompt>[;preserve-geometry_<preserve geometry>][;multiple_<detect multiple>]
Uses generative AI to replace parts of your image with something else. Use the preserve-geometry parameter to fill exactly the same shape with the replacement.
- The generative replace effect can only be used on non-transparent images.
- The use of generative AI means that results may not be 100% accurate.
- The generative replace effect works best on simple objects that are clearly visible.
- Very small objects and very large objects may not be detected.
- Do not attempt to replace faces, hands or text.
- During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
- There is a special transformation count for the generative replace effect.
- The generative replace effect is not supported for animated images, fetched images or incoming transformations.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Learn more: Generative replace
Syntax details
Examples
gen_restore
e_gen_restore
Uses generative AI to restore details in poor quality images or images that may have become degraded through repeated processing and compression.
Consider also using the improve effect to automatically adjust color, contrast and brightness, or the enhance effect to improve the appeal of an image based on AI analysis. See this comparison of image enhancement options.
- The generative restore effect can only be used on non-transparent images.
- The use of generative AI means that results may not be 100% accurate.
- There is a special transformation count for the generative restore effect.
- The generative restore effect is not supported for animated images, fetched images or incoming transformations.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
See also: e_enhance | e_improve
Learn more: Generative restore
Example
gradient_fade
e_gradient_fade[:<type>][:<strength>]
Applies a gradient fade effect from the edge of an image. Use x or y to indicate from which edge to fade and how much of the image should be faded. Values of x and y can be specified as a percentage (range: 0.0 to 1.0), or in pixels (integer values). Positive values fade from the top (y) or left (x). Negative values fade from the bottom (y) or right (x). By default, the gradient is applied to the top 50% of the image (y_0.5).
Optional qualifiers
Syntax details
Examples
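Following the sign convention described above, a sketch (not SDK code) of a fade applied to the bottom 30% of an image could look like this; cloud name and public ID are placeholders:

```python
# Sketch only: fade the bottom 30% of the image using a negative y value,
# per the sign convention described above. Values are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/e_gradient_fade,y_-0.3/sample.jpg"
print(url)
```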
grayscale
green
hue
improve
e_improve[:<mode>][:<blend>]
Adjusts an image's colors, contrast and brightness to improve its appearance.
Consider also using generative restore to revitalize poor quality images, or the enhance effect to improve the appeal of an image based on AI analysis. See this comparison of image enhancement options.
See also: e_enhance | e_gen_restore
Syntax details
Examples
isolate
e_isolate[:prompt_(<prompt 1>[;...;<prompt n>])][;multiple_<detect multiple>][;mode_<mode>][;invert_<invert>]
Isolates an area or multiple areas of an image, described in natural language. You can choose to keep the content of the isolated area(s) and make the rest of the image transparent (like background removal), or make the isolated area(s) transparent, keeping the content of the rest of the image. Alternatively, you can make a grayscale mask of the isolated area(s) or everything excluding the isolated area(s), which you can use with other transformations such as e_mask, e_multiply, e_overlay and e_screen.
- During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
- When you specify more than one prompt, all the objects specified in each of the prompts will be isolated whether or not multiple_true is specified in the URL.
- There is a special transformation count for the isolate effect.
- The isolate effect is not supported for animated images, fetched images or incoming transformations.
- User-defined variables cannot be used for the prompt when more than one prompt is specified.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
See also: e_background_removal
Learn more: Shape cutouts: Use AI to determine what to remove or keep in an image
Syntax details
Examples
light
e_light[:shadowintensity_<intensity>]
When generating a 2D image from a 3D model, this effect introduces a light source to cast a shadow. You can control the intensity of the shadow that's cast.
Use with: f (format) | e_camera
Learn more: Generating an image from a 3D model
Syntax details
Examples
loop
e_loop[:<additional iterations>]
Loops a video or animated image the specified number of times.
Syntax details
Example
make_transparent
e_make_transparent[:<tolerance>]
Makes the background of an image or video transparent (or solid white for formats that do not support transparency). The background is determined as all pixels that resemble the pixels on the edges of an image or video, or the color specified by the color qualifier.
- For images with a uniform background, you may also want to try the e_bgremoval effect.
- If neither e_make_transparent nor e_bgremoval gives the desired result, it's recommended to try the e_background_removal effect, which uses the Cloudinary AI Background Removal add-on.
- You can also try the Pixelz Remove the Background add-on for professional manual background removal.
Optional qualifier
Learn more: Apply video transparency
Syntax details
Examples
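A minimal sketch (plain string assembly, not SDK code) of the syntax above, delivering as PNG so the resulting transparency is preserved; the tolerance value and asset names are placeholders:

```python
# Sketch only: make the background transparent with a tolerance of 10,
# and deliver as PNG so the transparency survives. Values are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/e_make_transparent:10/sample.png"
print(url)
```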
mask
e_mask
A qualifier that uses an image layer as a mask to hide, reveal, or change the opacity of the layer below it.
Use with: l_<image id> | l_fetch | l_text | u (underlay)
Example
morphology
e_morphology[:method_<method>][;iterations_<iterations>][;kernel_<kernel>][;radius_<radius>]
Applies kernels of various sizes and shapes to an image using different methods to achieve effects such as image blurring and sharpening.
Syntax details
Examples
multiply
e_multiply
A qualifier that blends image layers using the multiply blend mode, whereby the RGB channel numbers for each pixel from the top layer are multiplied by the values for the corresponding pixel from the bottom layer. The result is always a darker picture; since each value is less than 1, their product will be less than either of the initial values.
Use with: l_<image id> | l_fetch | l_text | u (underlay)
See also: Other blend modes: e_mask | e_overlay | e_screen
Example
negate
noise
e_noise[:<level>]
Adds visual noise to the video, visible as a random flicker of "dots" or "snow".
Learn more: Add visual noise
Syntax details
Example
oil_paint
opacity_threshold
e_opacity_threshold[:<level>]
Causes all semi-transparent pixels in an image to be either fully transparent or fully opaque. Specifically, each pixel with an opacity lower than the specified threshold level is set to an opacity of 0% (transparent). Each pixel with an opacity greater than or equal to the specified level is set to an opacity of 100% (opaque).
Syntax details
Example
ordered_dither
outline
e_outline[:<mode>][:<width>][:<blur>]
Adds an outline effect to an image. Specify the color of the outline using the co (color) qualifier. If no color is specified, the default outline is black.
Optional qualifier
Learn more: Outline effects
Syntax details
Example
overlay
e_overlay
A qualifier that blends image layers using the overlay blend mode, which combines the multiply and screen blend modes. The parts of the top layer where the base layer is light become lighter, and the parts where the base layer is dark become darker. Areas where the top layer is mid-gray are unaffected.
Use with: l_<image id> | l_fetch | l_text | u (underlay)
See also: Other blend modes: e_mask | e_multiply | e_screen
Example
pixelate
pixelate_faces
pixelate_region
e_pixelate_region[:<square size>]
Pixelates the region of an image specified by x, y, width and height, or an area of text. If no region is specified, the whole image is pixelated.
Optional qualifiers
x, y (x & y coordinates) | w (width) | h (height) | g_ocr_text
Syntax details
Examples
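Combining the effect with its optional region qualifiers in one component can be sketched as follows (plain strings, not SDK code); the square size and region coordinates are placeholders:

```python
# Sketch only: pixelate a 200x160 region whose top-left corner is at
# (100, 80), using 20-pixel squares. All values are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/e_pixelate_region:20,x_100,y_80,w_200,h_160/sample.jpg"
print(url)
```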
preview
e_preview[:duration_<duration>][:max_seg_<max segments>][:min_seg_dur_<min segment duration>]
Generates a summary of a video based on Cloudinary's AI-powered preview algorithm, which identifies the most interesting video segments in a video and uses these to generate a video preview.
Optional qualifier
Learn more: Generate an AI-based video preview
Syntax details
Example
progressbar
e_progressbar[:type_<bar type>][:color_<bar color>][:width_<width>]
Adds a progress indicator overlay to a video.
Learn more: Add a video progress indicator
Syntax details
Examples
recolor
e_recolor:<value1>:<value2>:...:<value9>[:<value10>:<value11>:...:<value16>]
Converts the colors of every pixel in an image based on a supplied color matrix, in which the value of each color channel is calculated based on the values from all other channels (e.g. a 3x3 matrix for RGB, a 4x4 matrix for RGBA or CMYK, etc.).
Syntax details
Example
red
redeye
replace_color
e_replace_color:<to color>[:<tolerance>][:<from color>]
Maps an input color and those similar to the input color to corresponding shades of a specified output color, taking luminosity and chroma into account, in order to recolor an object in a natural way. More highly saturated input colors usually give the best results. It is recommended to avoid input colors approaching white, black, or gray.
- This transformation only supports non-verbose, ordered syntax, so remember to include the tolerance parameter if specifying from color, even if you intend to use the default tolerance.
- Consider using e_gen_recolor if you want to specify particular elements in your image to recolor, rather than everything with the same color.
Learn more: Replace color
See also: e_gen_recolor
Syntax details
Examples
reverse
saturation
screen
e_screen
A qualifier that blends image layers using the screen blend mode, whereby the RGB channel numbers of the pixels in the two layers are inverted, multiplied, and then inverted again. This yields the opposite effect to multiply, and results in a brighter picture.
Use with: l_<image id> | l_fetch | l_text | u (underlay)
See also: Other blend modes: e_mask | e_multiply | e_overlay
Example
sepia
shadow
e_shadow[:<strength>]
Adds a gray shadow to the bottom right of an image. You can change the color and location of the shadow using the color and x,y qualifiers.
Optional qualifiers
x, y (x & y coordinates) | co (color)
Learn more: Shadow effect
See also: e_dropshadow
Syntax details
Example
sharpen
shear
e_shear:<x-skew>:<y-skew>
Skews an image according to the two specified values in degrees. Negative values skew an image in the opposite direction.
Syntax details
Example
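A sketch of the two-value skew syntax (not SDK code); the angles and asset names are placeholders:

```python
# Sketch only: skew the image 20 degrees horizontally and 0 vertically,
# per e_shear:<x-skew>:<y-skew>. Values are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/e_shear:20:0/sample.jpg"
print(url)
```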
simulate_colorblind
e_simulate_colorblind[:<condition>]
Simulates the way an image would appear to someone with the specified color blind condition.
Learn more: Blog post
Syntax details
Example
swap_image
e_swap_image:image_<image ref>[;index_<index>]
Replaces an image/texture in a 3D model.
Learn more: Changing the texture of a 3D model
Syntax details
Example
theme
e_theme:color_<bgcolor>[:photosensitivity_<level>]
Changes the main background color to the one specified, as if a 'theme change' was applied (e.g. dark mode vs light mode).
Learn more: Theme effect
Syntax details
Examples
tint
e_tint[:equalize][:<amount>][:<color1>][:<color1 position>][:<color2>][:<color2 position>][:...][:<color10>][:<color10 position>]
Blends an image with one or more tint colors at a specified intensity. You can optionally equalize colors before tinting and specify gradient blend positioning per color.
Learn more: Tint effects
Syntax details
Example
transition
e_transition
A qualifier that applies a custom transition between two concatenated videos.
Use with: l_video
Learn more: Concatenate videos with custom transitions
Example
trim
e_trim[:<tolerance>][:<color override>]
Detects and removes image edges whose color is similar to the corner pixels.
Syntax details
Example
unsharp_mask
upscale
e_upscale
Uses AI-based prediction to add fine detail while upscaling small images.
This 'super-resolution' feature scales each dimension by four, multiplying the total number of pixels by 16.
- To use the upscale effect, the input image must be smaller than 4.2 megapixels (the equivalent of 2048 x 2048 pixels).
- There is a special transformation count for the upscale effect.
- The upscale effect is not supported for animated images, fetched images or incoming transformations.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
Learn more: Upscaling with super resolution
Example
vectorize
e_vectorize[:<colors>][:<detail>][:<despeckle>][:<paths>][:<corners>]
Vectorizes an image. The values can be specified either in an ordered manner according to the above syntax, or by name as shown in the examples below.
- To deliver an image as a vector image, make sure to change the format (or URL extension) to a vector format, such as SVG. However, you can also deliver in a raster format if you just want to get the 'vectorized' graphic effect.
- Large images are scaled down to 1000 pixels in the largest dimension before vectorization.
Syntax details
Examples
vibrance
viesus_correct
e_viesus_correct[:no_redeye][:skin_saturation[_<saturation level>]]
Requires the Viesus Automatic Image Enhancement add-on.
Enhances an image to its best visual quality.
Syntax details
Examples
vignette
volume
e_volume[:<volume>]
Increases or decreases the volume on a video or audio file.
Syntax details
Example
zoompan
e_zoompan[:mode_<mode>][;maxzoom_<max zoom>][;du_<duration>][;fps_<frame rate>][;from_([g_<gravity>][;zoom_<zoom>][;x_<x position>][;y_<y position>])][;to_([g_<gravity>][;zoom_<zoom>][;x_<x position>][;y_<y position>])]
Also known as the Ken Burns effect, this transformation applies zooming and/or panning to an image, resulting in a video or animated GIF (depending on the format you specify by either changing the extension or using the format parameter).
You can either specify a mode, which is a predefined type of zoom/pan, or you can provide custom start and end positions for the zoom and pan. You can also use the gravity parameter to specify different start and end areas, such as objects, faces, and automatically determined areas of interest.
- The resulting video or animated GIF does not go outside the bounds of the original image. So, if you specify an x,y position of (0,0), for example, the center of the frame will be as close to the top left as possible, but will not be centered on that position.
- The resolution of your image needs to be sufficient for the zoom level that you choose to maintain good quality.
- To achieve the best visual quality, the output resolution of the resulting video or animated image should be less than or equal to the input image resolution divided by the maximum zoom level. For example, if your original image has a width of 1920 pixels, and your maximum zoom is 3.2, you should display the resulting video at a width of 600 pixels or less (e.g. chain c_scale,w_600 onto the end of the transformation).
- If you apply the zoompan effect to an animated image, the first frame of the animated image is taken as the input.
- To achieve a smoother zoom, you can increase the frame rate, extend the length of the time over which the zoom occurs, and reduce the difference between zoom levels at the start and end of the transformation.
- The zoompan effect won't work if the resulting video exceeds the limits set for your account. As a general rule, use images that don't exceed 5000 x 5000 pixels.
- Currently, you can't use automatic gravity (g_auto) in other transformation components that are chained with the zoompan effect.
Learn more: The zoompan effect | Using objects with the zoompan effect
Syntax details
Examples
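Putting the sizing note above into a sketch (plain strings, not SDK code): a zoompan component followed by a chained c_scale,w_600 component, delivered as an animated GIF via the extension. The duration, frame rate, and asset names are placeholders:

```python
# Sketch only: zoompan followed by a chained c_scale,w_600 component,
# matching the 1920px-wide / maxzoom 3.2 sizing guidance. The .gif
# extension requests an animated image; all values are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/e_zoompan:maxzoom_3.2;du_6;fps_30/c_scale,w_600/sample.gif"
print(url)
```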
eo (end offset)
eo_<time value>
Specifies the last second to include in a video (or audio clip). This parameter is often used in conjunction with the so (start offset) and/or du (duration) parameters.
- Can be used independently to trim a video (or audio clip) by specifying the last second of the video to include. Everything after that second is trimmed off.
- Can be used as a qualifier to control the timing of a corresponding transformation.
As a qualifier, use with: e_boomerang | l_audio | l_<image id> | l_video
Learn more: Trimming videos | Adding image overlays to videos | Adding video overlays
Syntax details
Examples
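Since eo is often combined with so in a single component, here is a sketch (not SDK code) that trims a video to seconds 2 through 8; the cloud name and public ID are placeholders:

```python
# Sketch only: keep only seconds 2 through 8 of a video by combining
# so (start offset) and eo (end offset) in one component. Placeholders.
base = "https://res.cloudinary.com/demo/video/upload"
url = f"{base}/so_2,eo_8/dog.mp4"
print(url)
```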
f (format)
Converts (if necessary) and delivers an asset in the specified format regardless of the file extension used in the delivery URL.
Must be used for automatic format selection (f_auto) as well as when fetching remote assets, while the file extension for the delivery URL remains the original file extension.
In most other cases, you can optionally use this transformation to change the format as an alternative to changing the file extension of the public ID in the URL to a supported format. Both will give the same result.
In some older SDKs, this transformation parameter is called fetch_format. These SDKs also have a format parameter, which is not a transformation parameter, but is used to change the file extension, as shown in the file extension examples - #2. The later SDKs have a single format parameter (which parallels the behavior of the fetch_format parameter of older SDKs). You can use this to change the actual delivered format of any asset, but if you prefer to convert the asset to a different format by changing the extension of the public ID in the generated URL, you can do that in these later SDKs by specifying the desired extension as part of the public ID value, as shown in file extension examples - #1.
<supported format>
f_<supported format>
Converts and delivers an asset in the specified format.
Optional qualifier
Learn more: Fetch format | Transcoding videos to other formats
Syntax details
Examples
auto
f_auto[:media type]
Automatically generates (if needed) and delivers an asset in the optimal format for the requesting browser in order to minimize the file size.
Optionally, include a media type to ensure the asset is delivered with the desired media type when no file extension is included. For example, when delivering a video using f_auto and no file extension is provided, the media type will default to an image unless f_auto:video is used.
Learn more: Image automatic format selection | Video automatic format selection
Optional qualifiers
fl_animated | fl_attachment | fl_preserve_transparency
Syntax details
Examples
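A sketch (plain strings, not SDK code) of f_auto with extension-less public IDs, including the f_auto:video form described above; asset names are placeholders:

```python
# Sketch only: f_auto with no file extension on the public ID. For a
# video asset, f_auto:video keeps the media type from defaulting to image.
base = "https://res.cloudinary.com/demo"
image_url = f"{base}/image/upload/f_auto/sample"
video_url = f"{base}/video/upload/f_auto:video/dog"
print(image_url)
print(video_url)
```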
fl (flag)
Alters the regular behavior of another transformation or the overall delivery behavior.
You can apply multiple flags in a single transformation component by separating the flag names with a dot (.).
alternate
fl_alternate:lang_<language>[;name_<name>]
Defines an audio layer to be used as an alternate audio track for videos delivered using automatic streaming profile selection. Used to provide multiple audio tracks, for example when you want to provide audio in multiple languages.
Use with: l_audio
Learn more: Defining alternate audio tracks
Syntax details
Example
animated
fl_animated
Alters the regular video delivery behavior by delivering a video file as an animated image instead of a single frame image, when specifying an image format that supports both still and animated images, such as webp or avif.
Use with: fl_apng | fl_awebp | f_auto
See also: pg (page or file layer)
Learn more: Converting videos to animated images
Example
any_format
fl_any_format
Alters the regular behavior of the q_auto parameter, allowing it to switch to PNG8 encoding if the automatic quality algorithm decides that's more efficient.
Use with: q_auto
apng
fl_apng
The apng (animated PNG) flag alters the regular PNG delivery behavior by delivering an animated image asset in animated PNG format rather than a still PNG image. Keep in mind that animated PNGs are not supported in all browsers and versions.
Use with: fl_animated | f_png (or when specifying png as the delivery URL file extension).
Example
attachment
fl_attachment[:<filename>]
Alters the regular delivery URL behavior, causing the URL link to download the (transformed) file as an attachment rather than embedding it in your Web page or application.
Use with: f_auto
See also: fl_streaming_attachment
Syntax details
Example
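A sketch (not SDK code) of forcing a download with a custom filename; "my_download" and the asset names are placeholder values:

```python
# Sketch only: force download of the transformed asset, optionally naming
# the downloaded file. "my_download" is a placeholder filename.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/fl_attachment:my_download/sample.jpg"
print(url)
```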
awebp
fl_awebp
The awebp (animated WebP) flag alters the regular WebP delivery behavior by delivering an animated image or video asset in animated WebP format rather than as a still WebP image. Keep in mind that animated WebPs are not supported in all browsers and versions.
Use with: fl_animated | f_webp (or when specifying webp as the delivery URL file extension).
Example
c2pa
fl_c2pa
Use the c2pa flag when delivering images that you want to be signed by Cloudinary for the purposes of C2PA (Coalition for Content Provenance and Authenticity).
Learn more: Content provenance and authenticity
Example
clip
fl_clip
Trims pixels in an image according to a clipping path that was saved with the originally uploaded image. For example, a clipping path that was manually created using Photoshop.
If there are multiple paths stored in the file, you can indicate which clipping path to use by specifying either the path number or name as the value of the page parameter (pg in URLs).
Use with: pg (page or file layer)
Examples
clip_evenodd
fl_clip_evenodd
Trims pixels in an image according to a clipping path that was saved with the originally uploaded image using an 'evenodd' clipping rule to determine whether points are inside or outside of the path.
Example
cutter
fl_cutter
Trims the pixels on the base image according to the transparency levels of a specified overlay image. Where the overlay image is opaque, the original is kept and displayed, and wherever the overlay is transparent, the base image becomes transparent as well. This results in a delivered image displaying the base image content trimmed to the exact shape of the overlay image.
Learn more: Shape cutouts: keep a shape
Examples
draco
fl_draco
Optimizes the mesh buffer in glTF 3D models using Draco compression.
Learn more: Applying Draco compression to glTF files
Example
force_icc
fl_force_icc
Adds ICC color space metadata to an image, even when the original image doesn't contain any ICC data.
force_strip
fl_force_strip
Instructs Cloudinary to clear all image metadata (IPTC, Exif and XMP) while applying an incoming transformation.
getinfo
fl_getinfo
For images: returns information about both the input asset and the transformed output asset in JSON instead of delivering a transformed image.
For videos: returns an empty JSON file unless one of the qualifiers below is used.
Not applicable to files delivered in certain formats, such as animated GIF, PDF and 3D formats.
As a qualifier, returns additional data as detailed below.
Use with:
- g_auto:
  - For images, the returned JSON includes the cropping coordinates recommended by the g_auto algorithm.
  - For videos, the returned JSON includes the cropping confidence score for the whole video and per second, in addition to the horizontal center point of each frame (on a scale of 0 to 1) recommended by the g_auto algorithm.
- g_<face-specific-gravity>: For images, the returned JSON includes the coordinates of facial landmarks relative to the top-left corner of the original image.
- e_preview: For videos, the returned JSON includes an importance histogram for the video.
Learn more:
- Note on using fl_getinfo with g_auto
- Returning the coordinates of facial landmarks
- Returning video importance histograms
Examples
group4
fl_group4
Applies Group 4 compression to the image. Currently applicable to TIFF files only. If the original image is in color, it is transformed to black and white before the compression is applied.
Use with: f_tiff (or when specifying tiff as the delivery URL file extension)
Example
hlsv3
fl_hlsv3
A qualifier that delivers an HLS adaptive bitrate streaming file as HLS v3 instead of the default version (HLS v4).
This flag is supported only for product environments with a private CDN configuration.
Use with: sp (streaming profile)
Learn more: Adaptive bitrate streaming
ignore_aspect_ratio
fl_ignore_aspect_ratio
A qualifier that adjusts the behavior of scale cropping. By default, when only one dimension (width or height) is supplied, the other dimension is automatically calculated to maintain the aspect ratio. When this flag is supplied together with a single dimension, the other dimension keeps its original value, thus distorting an image by scaling in only one direction.
Use with: c_scale
Example
ignore_mask_channels
fl_ignore_mask_channels
A qualifier that ensures that an alpha channel is not applied to a TIFF image if it is a mask channel.
Use with: f_tiff (or when specifying tiff as the delivery URL file extension)
immutable_cache
fl_immutable_cache
Sets the cache-control header of an image to immutable, which instructs the browser that the image does not need to be revalidated with the server when the page is refreshed, and can be loaded directly from the cache. Currently supported only in Firefox.
keep_attribution
fl_keep_attribution
Cloudinary's default behavior is to strip almost all metadata from a delivered image when generating new image transformations. Applying this flag alters this default behavior, and keeps all the copyright-related fields while still stripping the rest of the metadata.
Learn more: Default optimizations
See also: fl_keep_iptc
keep_dar
fl_keep_dar
Keeps the Display Aspect Ratio (DAR) metadata of an originally uploaded video (if it's different from the delivered video dimensions).
keep_iptc
fl_keep_iptc
Cloudinary's default behavior is to strip almost all embedded metadata from a delivered image when generating new image transformations. Applying this flag alters this default behavior, and keeps all of an image's embedded metadata in the transformed image.
Learn more: Default optimizations
See also: fl_keep_attribution
Example
layer_apply
fl_layer_apply
A qualifier that enables you to apply chained transformations to an overlaid image or video. The first component of the overlay (l_<image_id>) acts as an opening parenthesis of the overlay transformation, and the fl_layer_apply component acts as the closing parenthesis. Any transformation components between these two are applied as chained transformations to the overlay and not to the base asset.
This flag is also required when concatenating images to videos or concatenating videos with custom transitions.
Use with: l_<image id> | l_audio | l_video
Learn more:
- Applying multiple transformations to overlays
- Concatenate videos with images
- Concatenate videos with custom transitions
Example
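As a sketch of this "parentheses" behavior, the URL below is built by hand (the demo cloud name and the badge and sample.jpg public IDs are placeholders, not real assets):

```python
# Hand-built delivery URL. Components between l_badge and fl_layer_apply
# apply to the overlay only, not to the base image.
base = "https://res.cloudinary.com/demo/image/upload"
components = [
    "l_badge",                      # opening "parenthesis" of the overlay
    "w_100,e_grayscale",            # chained transformations on the overlay
    "fl_layer_apply,g_south_east",  # closing "parenthesis" plus placement
]
url = "/".join([base, *components, "sample.jpg"])
```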
lossy
fl_lossy
When used with an animated GIF file, instructs Cloudinary to use lossy compression when delivering the animated GIF. By default, a quality of 80 is applied when delivering with lossy compression. You can use this flag in conjunction with a specified q_<quality level> to deliver a higher or lower level of lossy compression.
When used while delivering in PNG format, instructs Cloudinary to deliver the image in PNG format (as requested) unless there is no transparency channel, in which case it is delivered in JPEG format instead.
Use with: f_gif with or without q_<quality level> | f_png (or when specifying gif or png as the delivery URL file extension)
Learn more: Applying lossy GIF compression
Example
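A hand-built illustration (the demo cloud name and kitten.gif public ID are placeholders) combining the flag with an explicit quality level:

```python
# Lossy GIF compression with quality 60 instead of the default 80.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/f_gif,fl_lossy,q_60/kitten.gif"
```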
mono
fl_mono
Converts the audio channel in a video or audio file to mono. This can help to optimize your video files if stereo sound is not essential.
Example
no_overflow
fl_no_overflow
A qualifier that prevents an image or text overlay from extending the delivered image canvas beyond the dimensions of the base image.
Use with: l_<image id> | l_text
See also: fl_text_disallow_overflow
Example
no_stream
fl_no_stream
Prevents a video that is currently being generated on the fly from beginning to stream until the video is fully generated.
png8 / png24 / png32
fl_png8
fl_png24
fl_png32
By default, Cloudinary delivers PNGs in PNG-24 format, or if f_auto and q_auto are used, these determine the PNG format that minimizes file size while maximizing quality. In some cases, the algorithm will select PNG-8. By specifying one of these flags when delivering a PNG file, you can override the default Cloudinary behavior and force the requested PNG format.
See also: fl_any_format
preserve_transparency
fl_preserve_transparency
A qualifier that ensures that the f_auto parameter will always deliver in a transparent format if the image has a transparency channel.
Use with: f_auto
progressive
fl_progressive[:<mode>]
Generates a JPG or PNG image using the progressive (interlaced) format. This format allows the browser to quickly show a low-quality rendering of the image until the full quality image is loaded.
Syntax details
rasterize
fl_rasterize
Reduces a vector image to one flat pixelated layer, enabling transformations like PDF resizing and overlays.
region_relative
fl_region_relative
A qualifier that instructs Cloudinary to interpret percentage-based (e.g. 0.8) width and height values for an image layer (overlay or underlay) as a percentage relative to the size of the defined or automatically detected region(s). For example, the region may be the coordinates of an automatically detected face or piece of text, or a custom-defined region. If an image has multiple regions, the specified overlay image is overlaid over each identified region at a size relative to the region it overlays.
Use with: l_<image id> | u (underlay)
AND one of the following special gravities: adv_eyes, adv_faces, custom, face, faces, ocr_text
Learn more:
- Placing images over detected faces
- Advanced Facial Attributes Detection Add-on - Placing images over faces and eyes
- OCR Text Detection and Extraction Add-on - Overlaying text with images
Examples
relative
fl_relative
A qualifier that instructs Cloudinary to interpret percentage-based (e.g. 0.8) width and height values for an image layer (overlay or underlay) as a percentage relative to the size of the base image, rather than relative to the original size of the overlay image. This flag enables you to use the same transformation to add an overlay that always resizes to a relative size of whatever image it overlays.
Use with: l_<image id> | u (underlay)
Learn more: Transforming overlays
Example
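A hand-built sketch (demo, logo, and sample.jpg are placeholder names): the overlay is sized to 10% of the base image's width, whatever that base happens to be.

```python
# fl_relative makes w_0.1 mean "10% of the base image's width".
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/l_logo,fl_relative,w_0.1,g_south_east/sample.jpg"
```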
replace_image
fl_replace_image
A qualifier that takes the image specified as an overlay and uses it to replace the first image embedded in a PDF.
Transformation parameters that modify the appearance of the overlay (such as effects) can be applied. However, when this flag is used, the overlay image is always scaled exactly to the dimensions of the image it replaces. Therefore, resize transformations applied to the overlay are ignored. For this reason, it is important that the image specified in the overlay matches the aspect ratio of the image in the PDF that it will replace.
Use with: l_<image_id>
Example
sanitize
fl_sanitize
Relevant only for SVG images. Runs a sanitizer on the image.
splice
fl_splice[:transition[_([name_<transition name>][;du_<transition duration>])]]
A qualifier that concatenates (splices) the image, video or audio file specified as an overlay to a base video (instead of placing it as an overlay). By default, the overlay image, video or audio file is spliced to the end of the base video. To splice the overlay asset to the beginning of the base video instead, specify the start offset parameter set to 0 (so_0) alongside fl_layer_apply. You can optionally provide a cross-fade transition between assets.
Learn more: Concatenating media
See also: so (start offset)
Syntax details
Examples
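A hand-built sketch of splicing to the start of a video (demo, intro, and main.mp4 are placeholder names, and the component grouping shown is one plausible arrangement):

```python
# Splice the "intro" clip to the *beginning* of the base video by pairing
# so_0 with fl_layer_apply.
base = "https://res.cloudinary.com/demo/video/upload"
components = ["l_video:intro,fl_splice", "fl_layer_apply,so_0"]
url = "/".join([base, *components, "main.mp4"])
```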
streaming_attachment
fl_streaming_attachment[:<filename>]
Like fl_attachment, this flag alters the regular video delivery URL behavior, causing the URL link to download the (transformed) video as an attachment rather than embedding it in your web page or application. Additionally, if the video transformation is being requested and generated for the first time, this flag causes the video download to begin immediately, streaming it as a fragmented video file. (Most standard video players play fragmented video files without issue.)
(In contrast, if the regular fl_attachment
flag is used, then when a user requests the video transformation for the first time, the download will begin only after the complete transformed video has been generated.)
See also: fl_attachment
Syntax details
Example
strip_profile
fl_strip_profile
Removes all ICC color profile data from the delivered image.
text_disallow_overflow
fl_text_disallow_overflow
A qualifier used with text overlays that fails the transformation and returns a 400 (bad request) error if the text (in the requested size and font) exceeds the base image boundaries. This can be useful if the expected text of the overlay and/or the size of the base image isn't known in advance, for example with user-generated content. You can check for this error and if it occurs, let the user who supplied the text know that they should change the font, font size, or number of characters (or alternatively that they should provide a larger base image).
Use with: l_text
See also: fl_no_overflow
Example
text_no_trim
fl_text_no_trim
A qualifier used with text overlays that adds a small amount of padding around the text overlay string. Without this flag, text overlays are trimmed tightly to the text with no excess padding.
Use with: l_text
tiff8_lzw
fl_tiff8_lzw
A qualifier that generates TIFF images in the TIFF8 format using LZW compression.
Use with: f_tiff (or when specifying tiff as the delivery URL file extension)
tiled
fl_tiled
A qualifier that tiles the specified image overlay over the entire image. This can be useful for adding a watermark effect.
Use with: l_<image id>
Learn more: Automatic tiling
Example
truncate_ts
fl_truncate_ts
Truncates (trims) a video file based on the times defined in the video file's metadata (relevant only where the file metadata includes a directive to play only a section of the video).
waveform
fl_waveform
Instead of delivering the audio or video file, generates and delivers a waveform image in the requested image format, based on the audio from the audio or video file. By default, the waveform color is white and the background is black. You can customize these using the co_<color> and b_<color value> parameters.
Optional qualifiers
Learn more: Auto-generated waveform images
Example
fn (custom function)
fn_<function type>:<source>
Injects a custom function into the image transformation pipeline. You can use a remote function/lambda as your source, run WebAssembly functions from a compiled .wasm file stored in your Cloudinary product environment, deliver assets based on filters using tags and structured metadata, or filter assets returned when generating a client-side list.
Learn more: Custom functions
Syntax details
Example
fps (FPS)
fps_<frames per second>[-<maximum frames per second>]
Controls the FPS (Frames Per Second) of a video or animated image to ensure that the asset (even when optimized) is delivered with an expected FPS level (for video, this helps with sync to audio). Can also be specified as a range.
Syntax details
Examples
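A hand-built sketch (demo and dog.mp4 are placeholders) showing a fixed rate versus an accepted range:

```python
# fps_25 forces 25 frames per second; fps_25-30 accepts anything in range.
base = "https://res.cloudinary.com/demo/video/upload"
urls = [f"{base}/{fps}/dog.mp4" for fps in ("fps_25", "fps_25-30")]
```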
g (gravity)
A qualifier that determines which part of an asset to focus on, and thus which part of the asset to keep, when any part of the asset is cropped. For overlays, this setting determines where to place the overlay.
Learn more: Control image gravity | Control video gravity
<compass position>
g_<compass position>
A qualifier that defines a fixed location within an asset to focus on.
Use with: c_auto - image only | c_crop | c_fill | c_lfill | c_lpad | c_mpad | c_pad | c_thumb | l_<image id> | l_fetch | l_text | l_subtitles | l_video | u (underlay) | x, y (x & y coordinates)
Syntax details
Example
<special position>
g_<special position>
A qualifier that defines a special position within the asset to focus on.
The only special position supported for animated images is custom. If other positions are specified in an animated image transformation, center gravity is applied.
Use with: c_auto | c_crop | c_fill | c_lfill | c_scale (for g_liquid only) | c_thumb | e_pixelate_region | l_<image id> | l_fetch | l_text | u (underlay) | x, y (x & y coordinates)
See also: fl_getinfo
Syntax details
Examples
<object>
g_<object>
Requires the Cloudinary AI Content Analysis add-on.
A qualifier for cropping an image to automatically crop around objects without needing to specify dimensions or an aspect ratio.
Object gravity is not supported for animated images. If g_<object> is used in an animated image transformation, center gravity is applied.
Use with: c_auto | c_crop | c_fill | c_lfill | c_thumb
Learn more: Cloudinary AI Content Analysis add-on
Syntax details
Example
auto
g_auto[:<algorithm>][:<focal gravity>][:<thumb aggressiveness>][:thirds_0]
A qualifier to automatically identify the most interesting regions in the asset, and include in the crop.
- Automatic gravity is not supported for animated images. If g_auto is used in an animated image transformation, center gravity is applied, except when c_fill_pad is also specified, in which case an error is returned.
- Any custom coordinates defined for a specific image override the automatic cropping algorithm, and only the custom coordinates are used 'as is' for the gravity, unless you specify custom_no_override or none as the focal gravity.
Use with: c_auto - image only | c_auto_pad - image only | c_crop - image only | c_fill | c_fill_pad | c_lfill - image only | c_thumb - image only
Learn more: Automatic image cropping | Automatic video cropping
See also: fl_getinfo
Syntax details
Examples
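A minimal hand-built sketch (demo and sample.jpg are placeholders): fill a square while letting g_auto choose the region to keep.

```python
# g_auto picks the most interesting region when filling to 300x300.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/c_fill,g_auto,w_300,h_300/sample.jpg"
```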
region
g_region_!<region name>!
A qualifier to specify a named custom region in the image to focus on.
- You can set named custom regions using the regions parameter of the upload, explicit or update methods.
- You can see the coordinates of named regions in images:
  - Using the list delivery type: apply a tag to the image and use the syntax https://res.cloudinary.com/<your_cloud_name>/image/list/<tag>.json
  - In the response to a request for details of a resource.
Use with: c_auto | c_crop | c_fill | c_lfill | c_lpad | c_mpad | c_pad | c_thumb | l_<image id> | l_fetch | l_text | u (underlay)
Learn more: Custom regions
Syntax details
Example
track_person
g_track_person[:obj_<object>][;position_<position>][;adaptivesize_<size>]
A qualifier to add an image or text layer that tracks the position of a person throughout a video. Can be used with fashion object detection to conditionally add the layer based on the presence of a specified object.
- Only one tracked layer can be applied at a time.
- The maximum video duration that tracked layers can be applied to is 3 minutes.
- When requesting your video on the fly, you will receive a 423 response until the video has been processed. Once processed, subsequent transformations will be applied synchronously.
- You can apply transformations to the layer, such as controlling duration, by adding them to the layer definition component (e.g. l_price_tag,du_3)
Use with: l_<image id> | l_fetch | l_text | u (underlay)
Syntax details
Example
h (height)
h_<height value>
A qualifier that determines the height of a transformed asset or an overlay.
Use with: c (crop/resize) | l (layer) | e_blur_region | e_pixelate_region | u (underlay)
Learn more: Resizing and cropping images | Placing layers on images | Placing layers on videos
See also: w (width) | ar (aspect ratio) | Arithmetic expressions
Syntax details
Examples
if (if condition)
if_[<directive>][_<asset characteristic>_<operator>_<asset characteristic value>]
Applies a transformation only if a specified condition is met.
Learn more: Conditional image transformations | Conditional video transformations
See also: Arithmetic expressions
Syntax details
Examples
ki (keyframe interval)
ki_<interval value>
Explicitly sets the keyframe interval of the delivered video.
Syntax details
Example
l (layer)
Applies a layer over the base asset, also known as an overlay. This can be an image or video overlay, a text overlay, subtitles for a video or a 3D lookup table for images or videos.
You will often want to adjust the dimension and position of the overlay. You do this by using the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.
In addition to these common overlay transformations, you can apply nearly any supported image or video transformation to an image or video overlay, including applying chained transformations, by using the fl_layer_apply flag to indicate the end of the layer transformations.
Learn more:
- Placing layers on images
- Placing layers on videos
- Layer transformation syntax for images and videos
- Applying 3D LUTs to images and videos
- Blending and masking layers
See also: u (underlay)
<image id>
l_<image id>
Overlays an image on the base image or video.
Adjust the dimensions and position of the overlay with the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.
Optional qualifiers
| bo (border) | du (duration) | e_anti_removal | e_multiply | e_overlay | e_screen | eo (end offset) | fl_no_overflow | fl_region_relative | g_<compass position> | g_<special position> | h (height) | so (start offset) | w (width) | x, y (x & y coordinates)
Learn more: Adding image overlays to images | Adding image overlays to videos
Syntax details
Examples
audio
l_audio:<audio id>
Overlays the specified audio track on a base video or another audio track. If you specify a video to overlay, only the audio track will be applied. You can use this to mix multiple audio tracks together or add additional audio tracks when using automatic streaming profile selection.
Optional qualifiers
du (duration) | eo (end offset) | fl_alternate | fl_layer_apply | fl_splice | so (start offset)
Learn more: Adding audio overlays | Mixing audio tracks | Defining alternate audio tracks
Syntax details
Example
fetch
l_fetch:<base64 encoded URL>
Overlays a remote image onto an image or video.
Adjust the dimensions and position of the overlay with the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.
Optional qualifiers
bo (border) | du (duration) | e_anti_removal | e_multiply | e_overlay | e_screen | eo (end offset) | fl_no_overflow | fl_region_relative | g_<compass position> | g_<special position> | h (height) | so (start offset) | w (width) | x, y (x & y coordinates)
Learn more: Adding image overlays
Syntax details
Examples
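The remote URL is passed to l_fetch in base64 form. A hand-built sketch (demo, the remote address, and sample.jpg are placeholders; the URL-safe base64 variant is shown here):

```python
import base64

# Encode the remote image URL for use as an l_fetch value.
remote = "https://upload.example.com/logo.png"
encoded = base64.urlsafe_b64encode(remote.encode()).decode()
url = f"https://res.cloudinary.com/demo/image/upload/l_fetch:{encoded}/sample.jpg"
```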
lut
l_lut:<lut public id>
Applies a 3D lookup table (3D LUT) to an image or video. LUTs are used to map one color space to another. The LUT file must first be uploaded to Cloudinary as a raw file.
Learn more: Applying 3D LUTs to images | Applying 3D LUTs to videos
Syntax details
Examples
subtitles
l_subtitles:<subtitle id>
Embeds subtitle text from an SRT or WebVTT file into a video. The subtitle file must first be uploaded as a raw file.
You can optionally set the font and font size (as optional values of your l_subtitles parameter), as well as the subtitle text color and either the subtitle background color or subtitle outline color (using the co and b/bo optional qualifiers). By default, subtitles are added in Arial, size 15, with white text and a black border.
Optional qualifiers
b_<color value> | bo (border) | co (color) | g_<compass position>
Learn more: Adding subtitles
Syntax details
Examples
text
l_text:<text style>:<text string>
Adds a text overlay to an image or video.
Adjust the dimensions and position of the overlay with the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.
Optional qualifiers
b_<color value> | bo (border) | c_fit | co (color) | e_anti_removal | e_multiply | e_overlay | e_screen | fl_no_overflow | fl_text_disallow_overflow | fl_text_no_trim | g_<compass position> | g_<special position> | h (height) | w (width) | x, y (x & y coordinates)
Learn more: Adding text overlays to images | Adding text overlays to videos | Adding auto-line breaks
Syntax details
Styling parameters
Examples
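Because commas and slashes are delimiters in the URL API, the text string must be percent-encoded first. A hand-built sketch (demo and flowers.jpg are placeholders):

```python
from urllib.parse import quote

# Percent-encode the overlay text: "," -> %2C, " " -> %20.
text = "Flowers, in bloom"
encoded = quote(text, safe="")
url = (
    "https://res.cloudinary.com/demo/image/upload/"
    f"l_text:Arial_40:{encoded}/flowers.jpg"
)
```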
video
l_video:<video id>
Overlays the specified video on a base video.
Adjust the dimensions and position of the overlay with the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.
Optional qualifiers
bo (border) | du (duration) | e_transition | eo (end offset) | fl_layer_apply | fl_splice | g_<compass position> | h (height) | so (start offset) | w (width) | x, y (x & y coordinates)
Learn more: Adding video overlays
Syntax details
Example
o (opacity)
o_<opacity level>
Adjusts the opacity of an asset and makes it semi-transparent.
See also: Arithmetic expressions
Syntax details
Examples
p (prefix)
p_<prefix value>
Adds a prefix to all style class names in the CSS that is created for a sprite.
Learn more: Applying transformations to sprites
Syntax details
Example
pg (page or file layer)
Delivers specified pages, frames, or layers of a multi-page/frame/layer file, such as a PDF, animated image, TIFF, or PSD.
Learn more: Paged and layered media | Animated images
extract
.<number>
pg_<number>
Delivers a page or layer of a multi-page or multi-layer file (PDF, TIFF, PSD), or a specified frame of an animated image.
Optional qualifier
Syntax details
Example
<range>
pg_<range>
Delivers the specified range of pages or layers from a multi-page or multi-layer file (PDF, TIFF, PSD).
Syntax details
Example
embedded
pg_embedded:<index>
Extracts and delivers an object embedded in a PSD file, by index.
Syntax details
Example
name
pg_embedded:name:<layer name>
Extracts and delivers an object embedded in a PSD file, by layer name.
Syntax details
Example
q (quality)
q_<quality value>
Controls the quality of the delivered asset. Reducing the quality is a trade-off between visual quality and file size.
Learn more:
<quality level>
q_<quality level>[:<chroma>]
Sets the quality to the specified level.
q_<quality level> for videos was deprecated in December 2023.
See also: Arithmetic expressions
Syntax details
Examples
auto
q_auto[:<quality type>][:sensitive]
Delivers an asset with an automatically determined level of quality.
Learn more: Automatic quality and encoding settings
Related flag: fl_any_format
Syntax details
Examples
r (round corners)
Rounds the corners of an image or video.
Learn More: Rounding image corners | Rounding video corners
<radius>
r_<pixel value>
Rounds all four corners of an asset by the same pixel radius.
See also: Arithmetic expressions
Syntax details
Example
<selected corners>
r_<value1>[:<value2>][:<value3>][:<value4>]
Rounds selected corners of an image, based on the number of values specified, similar to the border-radius
CSS property:
Syntax details
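Following the CSS border-radius convention, four values apply clockwise from the top-left corner. A hand-built sketch (demo and sample.jpg are placeholders):

```python
# r_20:0:40:40 -> top-left 20, top-right 0, bottom-right 40, bottom-left 40.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/r_20:0:40:40/sample.jpg"
```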
max
r_max
Delivers the asset as a rounded circle or oval shape.
- If the input asset has a 1:1 aspect ratio, the result is a circle.
- If rectangular, the result is an oval.
Examples
so (start offset)
so_<time value>
Specifies the first second to include in the video (or audio clip). This parameter is often used in conjunction with the eo (end offset) and/or du (duration) parameters.
- Can be used independently to trim the video (or audio clip) by specifying the first second of the video to include. Everything prior to that second is trimmed off.
- Can be used as a qualifier to control the timing of a corresponding transformation.
- Can be used to indicate the frame of the video to use for generating video thumbnails.
As a qualifier, use with: e_boomerang | l_audio | l_<image id> | l_video
Learn more: Trimming videos | Adding video overlays | Adding audio overlays to videos | Adding image overlays to videos | Video thumbnails
See also: fl_splice
Syntax details
Examples
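A hand-built trimming sketch (demo and surf.mp4 are placeholders) combining a start offset with a duration:

```python
# Start at 6.5 seconds and keep the next 10 seconds.
base = "https://res.cloudinary.com/demo/video/upload"
url = f"{base}/so_6.5,du_10/surf.mp4"
```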
sp (streaming profile)
Determines the streaming profile to apply when delivering a video using adaptive bitrate streaming.
auto
sp_auto[:maxres_<maximum resolution>][;subtitles_<subtitles config>]
Lets Cloudinary choose the best streaming profile on the fly for both HLS and DASH. You can limit the resolution at which to stream the video by specifying the maximum resolution.
Learn more: Automatic streaming profile selection
Syntax details
Examples
<profile name>
sp_<profile name>[:subtitles_<subtitles config>]
Specifies the streaming profile to apply when delivering a video using HLS or MPEG-DASH adaptive bitrate streaming. Optionally allows for defining subtitles tracks for HLS, which will be defined as part of the manifest file.
Optional qualifier
Learn more: Adaptive bitrate streaming | Pre-defined streaming profiles | Create new custom streaming profiles
Syntax details
Examples
t (named transformation)
t_<transformation name>
Applies a pre-defined named transformation to an image or video.
Learn more: Named transformations | Create a named transformation
Syntax details
Example
u (underlay)
u_<image id>
Applies an image layer under the base image or video.
You can adjust the dimensions and position of the underlay using the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the underlay transformation.
In addition to these common underlay transformations, you can apply nearly any supported image transformation to an image underlay, including applying chained transformations, by using the fl_layer_apply flag to indicate the end of the layer transformations.
Optional qualifiers
e_anti_removal | e_multiply | e_overlay | e_screen | e_mask | fl_region_relative | fl_relative | g_<compass position> | g_<special position> | h (height) | w (width) | x, y (x & y coordinates)
Learn more: Adding image underlays | Using the fl_layer_apply flag | Blending and masking layers
See also: l (layer)
Syntax details
Examples
vc (video codec)
Sets the video codec to use when encoding a video.
Learn more: Video codec settings
<codec value>
vc_<codec value>[:<profile>[:<level>][:bframes_<bframes>]]
Sets a specific video codec to use to encode a video. For h264
, optionally include the desired profile and level.
Syntax details
Examples
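A hand-built sketch (demo and outdoors.mp4 are placeholders) requesting a specific h264 profile and level:

```python
# h264 with the baseline profile at level 3.1.
base = "https://res.cloudinary.com/demo/video/upload"
url = f"{base}/vc_h264:baseline:3.1/outdoors.mp4"
```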
auto
vc_auto
Normalizes and optimizes a video by automatically selecting the most appropriate codec based on the output format.
The settings for each format are:
Format | Video codec | Profile | Quality | Audio Codec | Audio Frequency |
---|---|---|---|---|---|
MP4 | h264 | high 1 | auto:good | aac | 22050 |
WebM | vp9 2 | N/A | auto:good | vorbis | 22050 |
OGV | theora | N/A | auto:good | vorbis | 22050 |
1. For older Cloudinary accounts the default is baseline. Submit a support request to change this default.
2. For older Cloudinary accounts the default is vp8. Submit a support request to change this default.
Optional qualifiers
Example
none
vc_none
Removes the video codec to leave just the audio, useful when you want to extract the audio from a video.
Example
vs (video sampling)
vs_<sampling rate>
Sets the sampling rate to use when converting videos or animated images to animated GIF or WebP format. If not specified, the resulting GIF or WebP samples the whole video/animated image (up to 400 frames, at up to 10 frames per second). By default, the duration of the resulting animated image is the same as the duration of the input, no matter how many frames are sampled from the original video/animated image (use the dl (delay) parameter to adjust the amount of time between frames).
Related flag: fl_animated
Learn more: Converting videos to animated images
Syntax details
Examples
w (width)
A qualifier that sets the desired width of an asset using a specified value, or automatically based on the available width.
<width value>
w_<width value>
A qualifier that determines the width of a transformed asset or an overlay.
Use with: c (crop/resize) | l (layer) | e_blur_region | e_pixelate_region | u (underlay)
Learn more: Resizing and cropping images | Placing layers on images | Placing layers on videos
See also: h (height) | ar (aspect ratio) | Arithmetic expressions
Syntax details
Examples
auto
A qualifier that determines how to automatically resize an image to match the width available for the image in a responsive layout. The parameter can be further customized by overriding the default rounding step or by using automatic breakpoints.
w_auto[:<rounding step>][:<fallback width>]
The width is rounded up to the nearest rounding step (every 100 pixels by default) in order to avoid creating extra derived images and consuming too many extra transformations. Only works for certain browsers and when Client-Hints are enabled.
Use with: c_limit
Learn more: Automatic image width
Syntax details
Examples
w_auto:breakpoints[_<breakpoint settings>][:<fallback width>][:json]
The width is rounded up to the nearest breakpoint, where the optimal breakpoints are calculated using either the default breakpoint request settings or using the given settings.
Use with: c_limit
Learn more: Responsive breakpoint request settings
Syntax details
Examples
x, y (x & y coordinates)
x/y_<coordinate value>
A qualifier that adjusts the starting location or offset of the corresponding transformation action.
Action | Effect of x & y coordinates |
---|---|
c_crop | The top-left coordinates of the crop (positive x = right, positive y = down). |
e_blur_region | The top-left coordinates of the blurred region (positive x = right, positive y = down). |
e_displace | See Displacement maps. |
e_gradient_fade | Positive values fade from the top (y) or left (x). Negative values fade from the bottom (y) or right (x). Values between 0.0 and 1.0 indicate a percentage. Integer values indicate pixels. |
e_pixelate_region | The top-left coordinates of the pixelated region (positive x = right, positive y = down). |
e_shadow | The offset of the shadow relative to the image in pixels. Positive values offset the shadow right (x) or down (y). Negative values offset the shadow left (x) or up (y). |
g_<compass position> | Offsets the position from the specified compass position, e.g. when positioning overlays. |
g_xy_center | The coordinates of the center of gravity (positive x = right, positive y = down). |
l_layer, u (underlay) | The offset of the layer according to the compass position (see above). If no compass position is specified, center is assumed. |
Use with: c_crop | e_blur_region | e_displace | e_gradient_fade | e_pixelate_region | e_shadow | g_<compass position> | g_<special position> | l_layer | u (underlay)
Learn more: Controlling gravity | Placing overlays
See also: Arithmetic expressions
Syntax details
Examples
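A hand-built sketch (demo, badge, and sample.jpg are placeholders) combining compass gravity with x/y offsets:

```python
# Anchor at the north-west corner, then offset 20px right (x) and 20px down (y).
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/l_badge,g_north_west,x_20,y_20/sample.jpg"
```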
z (zoom)
z_<zoom amount>
A qualifier that controls how closely to crop to the detected coordinates when using face-detection, custom-coordinate, or object-specific gravity (when using the Cloudinary AI Content Analysis add-on).
Use with: c_auto | c_crop | c_thumb
- When used with the thumb resize mode, the detected coordinates are scaled to completely fill the requested dimensions and then cropped as needed.
- When used with the crop resize mode, the zoom qualifier has an impact only if resize dimensions (height and/or width) are not specified. In this case, the crop dimensions are determined by the detected coordinates and then adjusted based on the requested zoom.
Learn more: Creating image thumbnails
See also: Arithmetic expressions
Syntax details
Examples
$ (variable)
$<variable name>[_<variable value>]
Defines and assigns values to user defined variables, so you can use the variables as values for other parameters.
Learn more: User-defined variables in image transformations | User-defined variables in video transformations
See also: Arithmetic expressions
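A hand-built sketch (demo, badge, and sample.jpg are placeholders, and the overlay sizing uses the documented arithmetic-expression syntax): define $w once, then reuse and derive from it in later components.

```python
# Define $w = 200, scale the base image to $w, then size a hypothetical
# overlay to a quarter of $w using the div arithmetic operator.
base = "https://res.cloudinary.com/demo/image/upload"
components = [
    "$w_200",              # define the variable
    "c_scale,w_$w",        # use it directly
    "l_badge,w_$w_div_4",  # derive from it
]
url = "/".join([base, *components, "sample.jpg"])
```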