Welcome to Linux Knowledge Base and Tutorial
"The place where you learn linux"

       lav* -h prints out a command synopsis
       yuv* -h prints out a command synopsis
       mpeg2enc|mp2enc -? prints out a command synopsis

What you can expect from the text

       Recording videos
       Creating videos from images
       Checking if recording was successful
       Editing the video
       Creating movie transitions
       Converting the stream to MPEG or DivX videos
       Creating sound
       Converting video
       Putting the streams together
       Creating MPEG1 videos
       Creating MPEG2 videos
       Creating VideoCDs
       Creating SVCDs
       Creating DivX videos
       Optimizing the stream
       Trading quality/speed
       SMP and distributed encoding


       The mjpegtools are a set of programs that can do recording,
       playback, editing and eventual MPEG compression of audio
       and video under Linux.

       Although primarily intended for use with capture/playback
       boards based on the Zoran ZR36067 MJPEG codec chip, the
       mjpegtools can easily be used to process and compress MJPEG
       video streams captured using xawtv or bcast2000 using
       simple frame-buffer devices.


       This isn't really a man page (yet). It's a modified version
       of the HOWTO for the tools, intended to give an
       introduction to the MJPEG tools and the creation of MPEG
       1/2 videos, VCDs and SVCDs, and the transcoding of existing
       MPEG streams.

       For more information about the programs, read the
       corresponding man pages. If you compile the tools on
       another platform, not all tools might work. The v4l
       (video4linux) parts will very likely fail to work.

       Start xawtv to see if you get a picture. If you want to use
       HW playback of the recorded streams, you have to start
       xawtv (any TV application works) once to get the streams
       played back. You should also check your sound card's mixer
       settings.

       Never try to stop or start the TV application while lavrec
       is running. If you start or stop the TV application, lavrec
       will stop recording, or your computer could freeze.

       One last thing about the data rates you get, before we
       start:
       Audio: (samplerate * channels * bitsize) / (8 * 1024)
       CD quality: (44100 samples/sec * 2 channels * 16 bit) /
       (8 * 1024) = 172.2 kB/sec
       The 8 * 1024 converts the value from bit/sec to kByte/sec.

       Video: (width * height * framerate * quality) / (200*1024)
       PAL HALF size : (352*288*25*80) / (200*1024) =  990 kB/sec
       PAL FULL size : (768*576*25*80) / (200*1024) = 4320 kB/sec
       NTSC HALF size: (320*200*30*80) / (200*1024) =  750 kB/sec
       NTSC FULL size: (640*480*30*80) / (200*1024) = 3600 kB/sec

       The 1024 converts bytes to kBytes. Not every card can
       record the sizes mentioned. The Buz and Marvel G400, for
       example, can only record a size of 720x576 when using -d 1;
       the DC10 records a size of 384x288 when using -d 2.

       When you add the audio and video data rates, the result is
       what your hard disk has to be able to write as a constant
       stream, or you will lose frames.
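       The data-rate formulas above can be checked with a short
       script (plain arithmetic, no mjpegtools involved):

```python
def audio_kb_per_sec(samplerate, channels, bitsize):
    """Audio data rate: (samplerate * channels * bitsize) / (8 * 1024)."""
    return samplerate * channels * bitsize / (8 * 1024)

def video_kb_per_sec(width, height, framerate, quality):
    """MJPEG video data rate estimate: (width * height * framerate * quality) / (200 * 1024)."""
    return width * height * framerate * quality / (200 * 1024)

audio = audio_kb_per_sec(44100, 2, 16)      # CD quality, ~172 kB/sec
video = video_kb_per_sec(352, 288, 25, 80)  # PAL half size, 990 kB/sec
print(round(audio + video))                 # total rate your disk must sustain
```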

       If you want to play with --mjpeg-buffer-size, remember that
       the value should be at least big enough for one frame to
       fit into it. The size of one frame is: (width * height *
       quality) / (200 * 1024) kB. If the buffer is too small, the
       rate calculation no longer matches and buffer overflows can
       happen.
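       A quick sanity check of a buffer size against the
       frame-size formula above (pure arithmetic; the example
       numbers are just PAL half size at quality 80):

```python
def frame_kb(width, height, quality):
    """Size of one MJPEG frame in kB: (width * height * quality) / (200 * 1024)."""
    return width * height * quality / (200 * 1024)

# PAL half size at quality 80: one frame is ~39.6 kB,
# so --mjpeg-buffer-size must be at least that big.
needed = frame_kb(352, 288, 80)
print(needed)
```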

       How video works, and the difference between the video
       types, is explained here:

       There you will also find how to create MPEG still images
       for VCD/SVCD.


       Should start recording now,
       -f q : use Quicktime as output format,
       -i n : use Composite-In with NTSC format,
       -d 1 : record pictures at full size (640x480)
       -q 80: set the quality to 80% of the captured image
       -s   : use stereo mode (default mono)
       -l 80: set the recording level to 80% of the max during
       recording
       -R l : set the recording source to Line-In
       -U   : with this, lavrec uses read instead of mmap for
       recording; this is needed if your sound card does not
       support mmap for recording.

       The cards record at different sizes in PAL when recording
       at -d 1:
       Buz and LML33: 720x576; the DC10: 768x576

       Other example:
       > lavrec -w -f a -i S -d 2 -l -1 record%02d.avi

       Should start recording:
       -w   : waits for user confirmation to start (press enter)
       -f a : use AVI as output format,
       -i S : use SECAM SVHS input (SECAM Composite recording is
       also possible: -i s)
       -d 2 : record pictures at half size
       -l -1: do not touch the mixer settings.
       record%02d.avi : here lavrec creates the first file, named
       record00.avi; after the file has reached a size of 1.6 GB
       (after about 20 minutes of recording) it starts a new
       sequence named record01.avi, and so on until the recording
       is stopped or the disk is full.

       Other example:
       > lavrec -f a -i t -q 80 -d 2 -C europe-west:SE20 test.avi
       Should start recording now,
       -f a : use AVI as output format,
       -i t : use tuner input
       -q 80: set the quality to 80% of the captured image
       -d 2 : record pictures at half size (352x288)
       -C ..: choose the TV channel. The corresponding -i t and
       -i T (video source: TV tuner) can currently be used on the
       Marvel G200/G400 and the Matrox Millennium G200/G400 with
       Rainbow Runner extension (BTTV support is under
       construction). For more information on how to make the TV
       tuner parts of these cards work, see the Marvel/Linux
       project at: http://marvel.sourceforge.net

       Last example:
       > lavrec -f a -i p -d 2 -q 80 -s -l 70 -R l -g 352x288
       Note: more options are described in the lavrec man page.

       There are more options, but with these you should be able
       to start.

       Now some hints on sensible settings: I habitually turn
       quality up to 80% or more for -d 2 capture. At full
       resolution, as low as 40% seems to be visually "perfect".
       -d 2 is already better than VHS video (by a *lot*!). If
       you're aiming to create VCDs, there is little to be gained
       by recording at full resolution, as you need to reduce to
       -d 2 resolution later anyway.

       Some information about the typical lavrec output while
       recording:
       0.06.14:22 int: 00040 lst:0  ins:0  del:0  ae:0  td1=0.014

       It should look like this. The first part shows the time
       lavrec has been recording. int: the interval between two
       frames. lst: the number of lost frames. ins and del: the
       number of frames inserted and deleted for sync correction.
       ae: the number of audio errors. td1 and td2: the
       audio/video time differences.

       (int) frame interval
            should be around 33 (NTSC) or 40 (PAL/SECAM).  If  it
            is  very different, you'll likely get a bad recording
            and/or many lost frames

       (lst) lost frames
            are bad and mean that something is not working very
            well during recording (too slow HD, too high CPU
            usage, ...). Try recording with a greater decimation
            and a lower quality.

       (ins, del) inserted OR deleted frames
            A few of them are normal -> sync correction. If you
            have many lost AND inserted frames, you're asking too
            much; your machine can't handle it. Use less demanding
            options, or try another sound card.

       (ae) audio errors
            are never good. Should be 0

       (td1, td2) time difference
            is always floating around 0, unless  sync  correction
            is disabled (--synchronization!=2, 2 is default).

       Notes about interlace field order - what can go wrong and
       where. Each film frame is sent as a pair of fields. These
       can be sent top or bottom field first, and sadly it's not
       always the same, though bottom-first appears to be usual.

       1. If you capture with the wrong field order (you start
       capturing each frame with a bottom rather than a top field,
       or vice versa), the frames of the movie get split *between*
       frames in the stream. Played back on a TV, where each field
       is displayed on its own, this is harmless: the sequence of
       fields played back is exactly the same as the sequence of
       fields broadcast. Unfortunately, played back on a computer
       monitor, where both fields of a frame appear at once, it
       looks *terrible*, because each frame is effectively mixing
       two moments in time 1/25 sec apart.

       2. The two fields can simply be swapped somehow, so that
       top gets treated as bottom and bottom treated as top.
       Juddering and "slicing" are the result.

       3. Somewhere in capturing/processing, the *order* in time
       of the two fields in each frame can get mislabeled
       somehow. This is not good, as it means that when playback
       eventually takes place, a field containing an image
       sampled earlier in time comes after an image sampled
       later. Weird "juddering" effects are the result.
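       The first failure mode can be sketched with a tiny
       simulation (plain Python lists standing in for fields, not
       mjpegtools code): if capture starts on the wrong field,
       each stored frame pairs a field from one source frame with
       a field from the next, so every frame mixes two moments in
       time.

```python
# Each source frame is a (top, bottom) pair of fields;
# the numbers are "moments in time".
source = [(0, 0), (1, 1), (2, 2), (3, 3)]

# Flatten into the broadcast field sequence, top field first.
fields = [f for frame in source for f in frame]

# Correct capture: pair fields 0&1, 2&3, ... -> each frame holds one moment.
correct = [tuple(fields[i:i + 2]) for i in range(0, len(fields) - 1, 2)]

# Wrong field order: capture starts one field late, pairing fields 1&2, 3&4, ...
wrong = [tuple(fields[i:i + 2]) for i in range(1, len(fields) - 1, 2)]

print(correct)  # every pair holds a single moment
print(wrong)    # every pair mixes two adjacent moments
```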

       How can I recognize if I have one of these problems?

       1.  This can be hard to spot.  If  you  have  mysteriously
       flickery  pictures  during playback try encoding a snippet
       with the  reverse  field-order  forced  (see  below).   If
       things  improve  drastically you know what the problem was
       and what the solution is!

       2.  Easy to spot: find a channel logo or a picture or two
       areas with a sharp horizontal boundary. If the top or
       bottom line of an area appears to be "detached", you've
       got this problem!

       3.   Use  "glav"  to  find  some scene changes.  If at the
       change odd lines are from one scene and even from  another
       you've captured with the wrong field order.

       How can you fix it?

       1.  To fix this one, the fields need to be "shifted"
       through the frames. yuvscaler's -M BOTT_FORWARD/TOP_FORWARD
       option can help here. Or re-record, exchanging -f a for
       -F A or vice versa. I believe the same applies to NTSC.

       2.    This  isn't  too bad either.  Use a tool that simply
       effect  for  efficient  MPEG  encoding  you  need  to  use


       You can use jpeg2yuv to create a YUV stream from separate
       JPEG images. This stream is sent to stdout, so that it can
       either be saved into a file, encoded directly to an MPEG
       video using mpeg2enc, or used for anything else.

       Saving a YUV stream can be done like this:
       > jpeg2yuv -f 25 -j image%05d.jpg > result.yuv

       This creates the file result.yuv containing the YUV video
       data at 25 fps. The -f option sets the frame rate. Note
       that image%05d.jpg means that the JPEG files are named
       image00000.jpg, image00001.jpg and so on (05 means five
       digits, 04 means four digits, etc.).
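       The pattern is the usual printf-style counter; the same
       rule, illustrated quickly:

```python
# jpeg2yuv/lavrec expand printf-style patterns; %05d is a
# zero-padded five-digit counter. The same expansion in Python:
pattern = "image%05d.jpg"
names = [pattern % i for i in range(3)]
print(names)  # ['image00000.jpg', 'image00001.jpg', 'image00002.jpg']
```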

       If you want to encode a  mpeg  video  directly  from  jpeg
       images without saving a separate video file type:
       >  jpeg2yuv  -f  25  -j  image%05d.jpg | mpeg2enc -o mpeg­

       This does the same as above but saves an MPEG video rather
       than a YUV video. See the mpeg2enc section for details on
       how to use mpeg2enc.

       You can also use yuvscaler between jpeg2yuv and  mpeg2enc.
       If you want to create a SVCD from your mpeg-video, type:
       >  jpeg2yuv  -f  25 -j image%05d.jpg | yuvscaler -O SVCD |
       mpeg2enc -f 4 -o video.m2v

       It is also useful to put yuvmedianfilter before mpeg2enc.
       The resulting video will be softer, but a bit less sharp:
       > jpeg2yuv -f 25 -j image%05d.jpg | yuvmedianfilter |
       mpeg2enc -o video.m1v

       It also depends on the quality (compression) of your  jpeg
       images whether yuvmedianfilter should be used or not.

       You  can  use the -b option to set the number of the image
       to start with.   For  example,  if  your  first  image  is
       image01.jpg rather than image00.jpg, type:
       >  jpeg2yuv  -b  1  -f  25  -j  image*.jpg  |  yuv2lav  -o

       Adding the sound to the stream then:
       > lavaddwav stream_without_sound.avi sound.wav stream.avi

       The number of images to be processed can be specified
       with the -n option.
       Other  picture formats can also be used if there is a con­
       verter to ppm:
       > ls *.tga | xargs -n1 tgatoppm | ppmtoy4m | yuvplay

       A list of filenames (ls *.tga) is given to xargs, which
       executes tgatoppm with one (-n 1) argument per call and
       feeds the output into ppmtoy4m. This time the video is
       only shown on the screen.
       xargs is only needed if the converter (tgatoppm) can only
       operate on a single image at a time.

       If you want to use the ImageMagick 'convert' tool (a Swiss
       Army Knife) try:
       > convert *.jpg ppm:- | ppmtoy4m | yuvplay

       That means: take all '.jpg' images in the directory,
       convert them to PPM format and pipe to stdout; ppmtoy4m
       then processes them ....


       You can use lavplay or glav.

       IMPORTANT: NEVER try to run xawtv and lavplay or glav with
       hardware playback at the same time; it won't work. Software
       playback works fine.

       > lavplay -p S record.avi

       You should see the recorded video and hear the sound, but
       the decoding of the video is done by the CPU, so your
       system is under quite a heavy load. You don't need xawtv
       or anything else running.

       The better way:
       > lavplay -p H record.avi

       The video is decoded and played by the hardware. The  sys­
       tem load is now very low. This will play it back on-screen
       using the hardware.

       You might also try:
       > lavplay -p C record.avi

       which will play it back using the hardware, but to the
       video output of the card.

       > glav record.avi

       Does the same as lavplay, but you have a nice GUI. The
       options for glav and lavplay are nearly the same.
       NOTE: xawtv sets up some things that lavplay and glav do
       not, but that are needed for HW playback. Don't forget to
       close xawtv !!
       NOTE2: Do not try to send glav or lavplay into the
       background; it won't work correctly !!!
       NOTE3: SECAM playback is currently (12.3.2001) monochrome
       only, but the recording and encoding are done right.

       Coming soon: there is a tool that makes recording videos
       very simple, named Linux Video Studio. You can download it
       at: http://ronald.bitfreak.net


       Most tasks can easily be done with glav, like deleting
       parts of the video and cutting, copying and pasting parts
       of the videos. For my part, I never needed to do anything
       that glav couldn't do.

       The modifications should be saved, because glav does not
       edit the video destructively. This means that the video is
       left untouched, and the modifications are kept in an extra
       "Edit List" file, readable with a text editor. These files
       can be used as input files for the lavtools, like lav2wav,
       lav2yuv and lavtrans.

       If you want to cut off the beginning and the end of the
       stream, mark the beginning and the end and use the "save
       select" button. The edit list file is then used as input
       for the lavtools. If you want to split a recorded video
       into smaller parts, simply select the parts and then save
       each part to a different list file.

       You can see all changes to the video and sound
       immediately; you do not need to recalculate anything.

       If you want to get a "destructive" version of your edited
       video, use:

       > lavtrans -o short_version.avi -f a editlist.eli
       -o    : specifies the output name
       -f a  : specifies the output format (AVI for example)
       editlist.eli : the list file where the modifications are
       described. You generate the list file with the "save all"
       or "save select" buttons in glav.

       Unify videos:

       > lavtrans -o stream.movtar -f m record_1.avi record_2.avi
       ... record_n.avi
       -o  : specifies the output name
       -f m: specifies the output format, movtar in this case
       Extracting the sound:
       > lav2wav editlist.eli > sound.wav

       Creating separate images:
       > mkdir jpg
       > lavtrans -o jpg/image%05d.jpg -f i stream.avi
       First create the directory "jpg". Then lavtrans will
       create single JPEG images in the jpg directory from the
       stream.avi file. The files will be named image00000.jpg,
       image00001.jpg, ....

       This may be interesting if you need sample images and do
       not want to play around with grabbing a single image.


       Thanks to pHilipp Zabel's lavpipe, we can now make simple
       transitions between movies or combine multiple layers of
       video.

       pHilipp wrote this HOWTO on how to make transitions:

       Let's assume this simple scenery: we have two input
       videos, intro.avi and epilogue.mov, and want to make
       intro.avi transition into epilogue.mov with a duration of
       one second (that is, 25 frames for PAL or 30 frames for
       NTSC).

       intro.avi and epilogue.mov have to be of the same format
       regarding frame rate and image resolution, at the moment.
       In this example they are both 352x288 PAL files. intro.avi
       contains 250 frames and epilogue.mov is 1000 frames long.

       Therefore our output file will contain:
        - the first 225 frames of intro.avi
        - a 25 frame transition containing the last 25 frames of
          intro.avi and the first 25 frames of epilogue.mov
        - the last 975 frames of epilogue.mov

       We could get the last 25 frames of intro.avi by calling:
       > lav2yuv -o 225 -f 25 intro.avi
       -o 225, the offset, tells lav2yuv to begin with frame #225,
       and -f 25 makes it output 25 frames from there on.

       Another possibility is:
       > lav2yuv -o -25 intro.avi
       since negative offsets are counted from the end.

       And the first 25 frames of epilogue.mov:
       > lav2yuv -f 25 epilogue.mov
       -o defaults to an offset of zero.

       The transition itself is done by transist.flt. An opacity
       of 0 means that the second stream is fully transparent
       (only stream one visible); at 255, stream two is fully
       opaque. In our case the correct call (transition from
       stream 1 to stream 2) would be:
       > transist.flt -o 0 -O 255 -d 25
       The -s and -n parameters are equivalent to the -o and -f
       parameters of lav2yuv and are only needed if you want to
       render only a portion of the transition, for whatever
       reason. Please note that this only affects the weighting
       calculations - none of the input is really skipped, so if
       you pass the skip parameter (-s 30, for example), you also
       need to skip the first 30 frames in lav2yuv (-o 30) in
       order to get the expected result. If you didn't understand
       this, send an email to the authors or simply ignore -s and
       -n. The whole procedure will be automated later, anyway.
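       The opacity ramp that -o 0 -O 255 -d 25 describes can be
       sketched like this (a simple linear interpolation is
       assumed here; the exact curve is up to transist.flt):

```python
def opacity(frame, start=0, end=255, duration=25):
    """Opacity of stream two at a given frame, linearly interpolated
    from start (-o) to end (-O) over duration (-d) frames (assumed model)."""
    return start + (end - start) * frame // (duration - 1)

ramp = [opacity(f) for f in range(25)]
print(ramp[0], ramp[24])  # 0 at the first frame, 255 at the last
```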

       Now we want to compress the yuv stream with yuv2lav:
       > yuv2lav -f a -q 80 -o transition.avi
       This reads the yuv stream from stdin and outputs an AVI
       file (-f a) with compressed JPEG frames of quality 80.

       Now we have the whole command for creating a transition:

       > ypipe "lav2yuv -o 225 -f 25 intro.avi" "lav2yuv -f 25
       epilogue.mov" | transist.flt -o 0 -O 255 -d 25 | yuv2lav
       -f a -q 80 -o transition.avi

       (This is one line.) The resulting video can be written as
       a LAV Edit List, a plain text file containing the
       following lines:

       LAV Edit List
       0 0 224
       1 0 24
       2 25 999

       This file can be fed into glav or lavplay, or you can pipe
       it into mpeg2enc with lav2yuv, or combine the whole thing
       into one single MJPEG file with lavtrans.

       - will create an MPEG-2 with the default bitrate in the
       same resolution as the source.

       However, better results can be achieved by trying out
       various options and finding out which works best for you.
       These are discussed below.

       The creation of MPEG-1 movies is explained with more
       examples and in more detail, because most things that
       apply to MPEG-1 also apply to the other output formats.

       For the creation of VCD/SVCD still sequences (-f 6, -f 7
       in mpeg2enc) take a look at: http://www.mir.com/DMG/
       Still sequences are needed for the creation of menus on
       VCD/SVCD. The creation of menus is described in the
       documentation of vcdimager.


       MPEG-1 videos need MPEG-1 layer 2 sound files. For MPEG-2
       videos you can use MPEG-1 layer 2 and MPEG-1 layer 3
       (MP3). But you should stick to MPEG-1 layer 2, because
       most MPEG-2 players (DVD players, for example; usually the
       different Winxx versions have great problems with this
       too) are not able to play MPEG-2 video with MPEG-1 layer 3
       sound.

       mp2enc is an MPEG layer 2 audio encoder. The toolame
       encoder is also able to produce a layer 2 file; you can
       use that one as well. For MP3 creation, I'm sure you
       already have an encoder.

       > lav2wav stream.avi stream1.avi | mp2enc -o sound.mp2

       This creates an MPEG layer 2 sound file out of stream.avi
       and stream1.avi with a 224 kBit/sec bitrate. You can
       specify more files and also use the placeholder %nd, where
       n describes the number of digits.
       Another example:
       > cat sound.wav | mp2enc -v 2 -V -o sound.mp2

       This creates a VCD-compatible output (bitrate = 224,
       stereo, sampling rate 44100) from the wav file. With -v 2,
       mp2enc is more verbose: while encoding, you see the
       seconds of audio already encoded.

       You can test the output with:
       > plaympeg sound.mp2

       NOTE: plaympeg is an MPEG-1 player for Linux; you can use
       other players as well for MPEG audio testing.

       A first video encoding example:
       > lav2yuv stream.avi | mpeg2enc -o video.m1v

       This creates a video file with the default bitrate of
       1152 kBit/sec. This is the bitrate you need if you want to
       create VCDs.

       > lav2yuv -d 2 stream*.avi | mpeg2enc -b 1500 -r 16 -o

       Here lav2yuv drops the 2 LSBs (least significant bits) of
       each pixel. Then mpeg2enc creates a video with a bitrate
       of 1500 kBit/s and uses a search radius of 16. That means
       that when trying to find similar 16x16 macroblocks of
       pixels between frames, the encoder looks up to 16 pixels
       away from the current position of each block. It looks
       twice as far when comparing frames one frame further
       apart, and so on. Reasonable values are 16 or 24. The
       default is 16, so adding the option here is redundant.
       Lower values (0, 8) improve the encoding speed, but you
       get lower quality (more visible artifacts); higher values
       (24, 32) improve the quality at the cost of speed. With
       the file description stream*.avi, all files matching this
       pattern are processed, beginning with 00, 01, ....

       Using yuvscaler, one can now also scale the video before
       encoding it. This can be useful for users with DC10 or
       DC10+ cards, which capture at -d 1 768x576 or -d 2 384x288
       (PAL/SECAM) or -d 1 640x480 (NTSC). These sizes cannot be
       scaled correctly to VCD or SVCD format with the -s option
       of lav2yuv. lav2yuv alone scales correctly only when a Buz
       or LML33 card is used.

       You get a full description of all options with:
       >yuvscaler -h

       >lav2yuv  stream.avi  |  yuvscaler  -O  VCD  | mpeg2enc -o

       This will rescale the 384x288 or 768x576 (PAL/SECAM) or
       352x240 or 640x480 (NTSC) stream to the VCD size 352x288
       (PAL/SECAM) or 352x240 (NTSC) and encode the resulting
       YUV data to an MPEG stream.

       It can also do SVCD scaling to 480x480 (NTSC) or 480x576
       (PAL):
       >lav2yuv stream.avi | yuvscaler -O SVCD -M BICUBIC |
       mpeg2enc -o video.m2v

       The mode keyword (-M) forces yuvscaler to use the
       higher-quality bicubic algorithm for downscaling instead
       of the default resample algorithm. Upscaling is always
       done by the bicubic algorithm.

       You can test the output with:
       > plaympeg video.m1v

       Note: these are only examples; there are more options you
       can use, and you can use most of them together to create
       high-quality videos with the lowest possible bitrate.
       Note2: the higher you set the search radius, the longer
       the conversion will take. In general: the more options
       used, the longer it takes.
       Note3: MPEG-1 was not designed to be VBR (variable
       bitrate)!! So if you encode with -q 15, mpeg2enc sets the
       maximal bitrate -b to 1152. If you want a VBR MPEG-1 you
       have to set -b very high (2500).
       Note4: maybe you should give better names than video.m1v.
       A good idea is a filename that tells you the options
       you've used (e.g. video_b1500_r16_41_21.m1v). Another
       possibility is to call all layer 2 files ".mp2", all
       MPEG-1 video files ".m1v" and all MPEG-2 video files
       ".m2v". Easy to see what's happening then. Reserve .mpg
       for multiplexed MPEG-1/2 streams.


       > mplex sound.mp2 video.m1v -o my_video.mpg

       Puts the sound.mp2 and video.m1v streams together into
       my_video.mpg.

       Now you can use your preferred MPEG player and watch it.
       All players based on the SMPEG library work well. Other
       players are xmovie, xine, gtv and MPlayer, for example.

       Note: If you have specified the -S option for mpeg2enc,
       mplex will automatically split the files if there is a %d
       in the output filename (looks like: -o test%d.mpg). The
       files generated this way are separate, stand-alone MPEG
       streams.

       Note: xine might have a problem with seeking through
       these files.
       Variable bit-rate multiplexing: remember to tell mplex
       that you're encoding VBR (-V option) as well as mpeg2enc
       (see the example scripts). It *could* auto-detect, but
       that is not working yet. You should tell mplex a video
       buffer size at least as large as the one you specified to
       mpeg2enc. Sensible numbers for MPEG-1 might be a ceiling
       bit-rate of 2800 kBit/s, a quality ceiling (quantization
       floor) of 6 and a buffer size of 400K.

       > mplex -V -r 1740 audio.mp2 video_vbr.m1v -o

       For MPEG-1 you can use MPEG layer 2 audio and MPEG-1
       video. A subset of MPEG-1 movies are VCDs. You can use VBR
       (variable bitrate) for the video, but the audio has to be
       CBR (constant bitrate).

       MPEG-1 is recommended for picture sizes up to 352x288 for
       PAL and 352x240 for NTSC; for larger sizes, MPEG-2 is the
       better choice. There is no exact line up to which MPEG-1
       is better than MPEG-2.
       Audio creation example:
       > lav2wav editlist.eli | mp2enc -o sound.mp2

       This will fit MPEG-1 quite well. You can save some bits by
       telling mp2enc to use a lower bitrate (-b option) like 160
       or 192 kBit/s:

       > lav2wav editlist.eli | mp2enc -b 128 -m -o sound.mp2

       This creates a mono output with a bitrate of 128 kBit/sec.
       The input this time is the edit list file (it can have any
       name) created with glav, so all changes you made in glav
       are processed directly and handed over to mp2enc. So you
       do NOT have to create an edited stream with lavtrans to
       get it converted properly.

       Video creation example:

       > lav2yuv -n 1 editlist.eli | mpeg2enc -b 2000 -r 24 -q 6
       -o video.m1v

       Here lav2yuv applies a low-pass noise filter to the
       images. Then mpeg2enc creates a video with a bitrate of
       2000 kBit/s (or 2000000 bit/s), but the -q flag activates
       variable bitrate with a quality factor of 6. It also uses
       a search radius of 24. An edit list file is used as input.

       When mpeg2enc is invoked without the -q flag, it creates
       "constant bit-rate" MPEG streams, where (loosely speaking)
       the strength of compression (and hence picture quality) is
       adjusted to ensure that on average each frame of video has
       exactly the specified number of bits. Such constant
       bit-rate streams are needed for broadcasting and for
       low-cost hardware like DVD and VCD players, which use slow
       fixed-speed player hardware.

       > lav2yuv -a 352x240+0+21 stream.avi | mpeg2enc -b 1152 -r
       16 -4 1 -2 1 -o video.m1v

       Usually there is a nearly black border at the top and the
       bottom, and a lot of bandwidth is used for something you
       do not want. The -a option sets everything that is not in
       the described area to black, but the image size (352x288)
       is not changed. Since you now have a really black border,
       the encoder only uses a few bits for encoding it. You are
       still VCD-compatible with this example.
       To determine the active window, extract one frame to the
       JPEG format:
       > lavtrans -f i -i 100 -o frame.jpg test.avi

       Edit it with a graphics program, cut out the borders, and
       you have the active size.

       The -4 1 and -2 1 options improve the quality by about
       10%, but conversion is slower.

       At the size of 352x288 (half PAL size, created when using
       the -d 2 option when recording), the needed bitrate
       is/should be between 1000 and 1500 kBit/s.

       Anyway, the major factor is the quality of the original
       and the degree of filtering. Poor-quality unfiltered
       material typically needs a higher rate to avoid visible
       artifacts. If you want to reduce the bit-rate without
       annoying artifacts when compressing broadcast material,
       you should try the noise filters. For lav2yuv these are -n
       [0..2] and -d [0..3]. There are other filters, which are
       described later.

       > lav2yuv stream.avi | mpeg2enc -b 1500 -n s -g 6 -G 20 -P
       -o video.m1v

       Here the stream.avi will be encoded with:
       -b 1500    : a bitrate of 1500 kBit/sec
       -n s       : the input video norm is forced to SECAM
       -P         : this ensures that 2 B-frames appear between
       adjacent I/P frames. Several common MPEG-1 decoders can't
       handle streams where fewer than 2 B-frames appear between
       I/P frames.
       -g 6 -G 20 : the encoder can dynamically size the output
       stream's groups-of-pictures to reflect scene changes. This
       is done by setting a maximum GOP size (-G flag) larger
       than the minimum (-g flag).
       For VCDs, sensible values might be a minimum of 9 and a
       maximum of 15. For SVCD, 6 and 18 would be good values. If
       you only want to play it back on a SW player, you can use
       other values.
       -B specifies the non-video (audio and mplex information)
       bitrate. The -B value of 260 should be fine for audio at
       224 kBit plus mplex information. For further information,
       take a look at the encoding scripts in the scripts
       directory.
       Multiplexing Example

       > mplex sound.mp2 video.m1v -o my_video.mpg

       Puts the sound.mp2 and the video.m1v streams together
       into my_video.mpg. It only works this easily if you have
       a CBR stream (i.e. you did not use the -q option).

       >   mplex   -V   -r   1740   audio.mp2   video_vbr.m1v  -o

       Here we multiplex a variable bitrate stream. mplex is a
       single pass multiplexer, so it can't detect the maximum
       bitrate and we have to specify it. The data rate for the
       output stream is: audio bitrate + peak video bitrate +
       1-2% for mplex information. If audio (-b 224) has 224kBit
       and video has 1500kBit (encoded with -b 1500 -q 9), then
       we have 1724 * 1.01, or about 1740kBit.
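       The mux-rate rule above (audio + peak video + 1-2% for
       mplex information) can be sketched as a tiny shell
       helper; the 1% overhead figure used here is the low end
       of the range quoted in the text:

```shell
# Estimate the mplex -r value for a VBR stream (kBit/s).
mux_rate() {
  audio_kbit=$1
  video_peak_kbit=$2
  # add ~1% for mplex information (integer arithmetic)
  echo $(( (audio_kbit + video_peak_kbit) * 101 / 100 ))
}

mux_rate 224 1500   # → 1741, i.e. about the 1740 used above
```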


       MPEG2 is recommended for sources with a larger picture
       size than 352x240 for NTSC and 352x288 for PAL. MPEG2 can
       also handle interlaced sources like recordings from TV at
       full size.
       MPEG2 allows the use of MPEG layer 3 (mp3) sound, so you
       can use your favourite mp3 encoder for the creation of
       the sound. The audio can also be a VBR stream.

       MPEG2 is usually a VBR stream. MPEG2 creation with
       optimizing needs a lot of CPU power. But a film with
       double the resolution is NOT 4 times larger than an MPEG1
       stream; depending on your quality settings it will be
       about 1.5 to 3 times larger than the MPEG1 stream at half
       the resolution.

       Audio creation Example

       > lav2wav editlist.eli | mp2enc -o sound.mp2

       This will fit the MPEG2 quite well. You can save some
       bits by telling mp2enc to use a lower bitrate (-b option)
       like 160 or 192 kBit/s. For the video you will usually
       use the -q option, because you usually want a space
       saving VBR stream. When using VBR streams the -b option
       tells mpeg2enc the maximum bitrate that can be used, and
       the -q option tells mpeg2enc which quality the stream
       should have.

       > lav2yuv editlist.eli | mpeg2enc -f 3 -4 1 -2 1 -q 7 -b
       4500 -V 300 -P -g 6 -G 18 -I 1 -o video.m2v

       This is more like a high quality MPEG2 stream. The -4 1
       -2 1 options give slightly better quality. With -b 4500
       -q 7 you tell mpeg2enc the maximum bitrate and the
       quality factor. -V is the video buffer size used for
       decoding the stream; for software playback it can be much
       higher than the default. Dynamic GOP is set with -g and
       -G; a larger GOP size can help reduce the bit-rate
       required for a given quality. The -P option also ensures
       that 2 B-frames appear between adjacent I/P frames. The
       -I 1 option tells mpeg2enc that the source is interlaced
       material like video; time-consuming interlace-adapted
       motion compensation and block encoding is then done.
       mpeg2enc switches to this mode automatically if the size
       of the frames you encode is larger than the VCD size for
       your TV norm.

       If you denoise the images with yuvdenoise and use its
       built-in deinterlacing (-F), you should tell mpeg2enc
       that it does not need to do motion estimation for
       interlaced material. Set the mpeg2enc -I 0 option to say
       that the frames are already deinterlaced. This will save
       a lot of time when encoding; if you don't do it there are
       no drawbacks other than the wasted time.

       You can also use scaling and options that optimize
       (denoise) the images to get smaller streams, but these
       options are explained in depth in their own sections.

       Which values should be used for VBR Encoding (-q)

       The -q option controls the minimum quantization of the
       output stream. Quantization controls the precision with
       which image information is encoded; the lower the value
       the better the image quality.

       Usually you also have to set a maximum bitrate with the
       -b option. So the tricky task is to find values for the
       -q and -b options that produce a nice movie without using
       too much bandwidth and without too many artefacts.

       A quality factor should be chosen such that the peak and
       average bit-rates reported by mplex differ by about
       20-25%.

       If the difference is very small (< 10%), it is very
       likely that the maximum bit-rate, not the quality factor,
       is limiting the quality.

       The same (7-9) quality factor for a full size picture and
       a top bitrate of 3500 to 4000 kBit won't produce too many
       artefacts either.
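       The 20-25% rule of thumb is easy to check against the
       peak and average figures mplex reports. A hypothetical
       helper (the numbers below are made up for illustration):

```shell
# Percentage by which the peak bit-rate exceeds the average.
rate_spread() {
  peak=$1
  avg=$2
  echo $(( (peak - avg) * 100 / avg ))
}

rate_spread 1500 1200   # → 25, in the suggested 20-25% band
rate_spread 1500 1430   # → 4, well under 10%: -b is probably the limit
```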

       For  SVCD/DVD  you  can  expect  a  result  like  the  one
       described if the maximal bitrate is not set too low:
       q <= 6 real sharp pictures, and good quality
       q <= 8 good quality
       q <= 10 average quality
       q >= 11 not that good
       q >= 13 here even still sequences might look blocky

       Multiplexing Example

       > mplex -f 3 -b 300 -r 4750 -V audio.mp3 video.m2v -o

       Now both streams are multiplexed: an mp3 audio and an
       MPEG2 video. You have to use the -f 3 option to tell
       mplex the output format. You also have to pass the
       decoder buffer size with -b, using the same value used
       when encoding the video. -r is the rate of video + audio
       + 1-2% of mplex information. The -V option tells mplex
       that your source is a VBR stream; if you don't use this
       option mplex creates something like a CBR stream with the
       bitrate you gave with the -r option, and these streams
       usually get BIG.


       VCD is a cut down version of MPEG1 streams. There are
       some limitations on VCDs, such as a video bitrate of
       1152kBit and layer 2 audio with 224kBit stereo. You are
       not allowed to use the -q option or dynamic GOP, and the
       video buffer is limited to 46kB.
       The image size is limited to 352x240 for NTSC, and to
       352x288 for PAL.

       If you have no VCD player and you plan to play it back on
       your DVD player, for example, your DVD player might be
       flexible enough to allow larger bitrates, dynamic GOP, a
       larger video buffer and so on.

       Audio creation Example:
       > lav2wav stream.avi | mp2enc -V -o sound.mp2

       -V forces VCD compatible output (same as: -b 224 -r 44100
       -s). For hardware players you should stick to 44.1kHz,
       224kBit/s stereo layer 2 audio.

       Video creation Example:
       > lav2yuv stream.avi | yuvscaler -O VCD | mpeg2enc -f 1 -r
       The -B option specifies the non-video (audio and mplex
       information) bitrate. The -B value of 260 should be fine
       for audio with 224kBit and mplex information. For further
       information take a look at the encoding scripts in the
       scripts directory. The multiplexed streams should then
       easily fit on a CD with 650MB.

       The default value (-S) is 700MB for the video: mpeg2enc
       automatically marks every stream at that size if -S does
       not set anything else. If you have a CD where you can
       write more, like 800MB, you have to set the -S option, or
       else mpeg2enc will mark the stream at 700MB and mplex
       will split the stream there, which might not be what you
       want.

       Multiplexing Example
       > mplex -f 1 sound.mp2 video.m1v -o vcd_out.mpg

       The  -f 1 option turns on a lot of weird stuff that other­
       wise has no place in a respectable multiplexer!

       Creating the CD: The multiplexed streams have to be
       converted to a VCD compatible image. This is done by
       vcdimager:

       > vcdimager testvideo.mpg

       Creates a videocd.bin, the data file, and a videocd.cue,
       which is used as the control file for cdrdao.

       You use cdrdao to burn the image. cdrdao is yet another
       fine Sourceforge project, which is found at:
       http://cdrdao.sourceforge.net/

       For MPEG-1 encoding of a typical (45 minute running time)
       show or 90-odd minute movie from an analog broadcast, a
       constant bit-rate of around 1800 kBit/sec should be
       ideal.
       The  resulting  files are around 700M for 45 minutes which
       fits nicely as a raw XA MODE2 data track on a CD-R.
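       That 700M figure follows directly from the bit-rates; a
       quick back-of-the-envelope check (mux overhead ignored):

```shell
# Approximate multiplexed size in MB for a given running time.
stream_mb() {
  minutes=$1
  video_kbit=$2
  audio_kbit=$3
  # kBit/s * seconds / 8 = kByte, / 1024 = MByte
  echo $(( (video_kbit + audio_kbit) * minutes * 60 / 8 / 1024 ))
}

stream_mb 45 1800 224   # → 667, roughly the 700M quoted above
```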

       For pure digital sources (DTV or DVD streams and  similar)
       VCD 1152 works fine.

       Note: If you encode VBR MPEG1 (-q), remember that the
       hardware was probably not designed for such playback,
       because it is not in the specifications. If it works, be
       very happy. I've noticed that it helps, when you have an
       MPEG1 stream, to tell vcdimager that it is an SVCD.
       vcdimager complains (but only with a warning, not a fatal
       error), and you should be able to burn it. This can
       convince the player to use other firmware and play it
       back correctly, but there is no guarantee of that.
       If your player doesn't support SVCD you may well find it
       can handle VCD streams with much higher than standard
       bit-rates; often as much as 2500kBit/sec is possible. The
       AudioVox 1680 for example can handle 2500kBit/s VCD rates
       (it also handles VCDs with VBR MPEG-1, but other players
       might not be so forgiving).

       With higher bit-rates and good quality source material it
       is worth trying mpeg2enc's -h flag, which produces a
       stream that is as sharp as the limits of the VCD standard
       permit. The -h flag also seems to help with low quality
       streams: the video does not look as sharp with the flag,
       but there are not as many glitches as without it.

       However, if your player supports it and you have the
       patience for the much longer encoding times, SVCD is a
       much better alternative. Using a more efficient MPEG
       format, SVCD more than doubles VCD's resolution while
       typically producing files that are rather less than twice
       as big.


       Record at full TV resolution (means: -d 1; for PAL this
       is 720x576, for NTSC 720x480).

       Audio creation Example:
       > lav2wav stream.avi | mp2enc -V -o sound.mp2

       NOTE: The SVCD specifications permit a much wider choice
       of audio rates; it is not necessary to use 224 kBit/sec.
       Any audio rate between 32 and 384 kBit/sec is permitted,
       and the audio may be VBR (Variable Bit Rate).

       Video Creation Example:
       >  lav2yuv  stream.avi | yuvscaler -O SVCD | mpeg2enc -f 4
       -q 7 -I 1 -V 200 -o video.m2v

       -f 4 sets the options for mpeg2enc to SVCD
       -q 7 tells mpeg2enc to generate a variable bitrate stream
       -I 1 tells mpeg2enc to assume that the original signal is
       field-interlaced video, where the odd rows of pixels are
       sampled a half frame interval after the even ones in each
       frame. The -I 0 (progressive output, no field pictures)
       option will also work for PAL.

       You can use lower bitrates, but the SVCD standard limits
       the total bit-rate (audio and video) to 2788800 Bit/sec.
       So with 224kBit/s audio and overheads, 2550 may already
       be marginally too tight. Since the SVCD format permits
       any audio rate between 32 and 224 kBit/sec, you can save
       a few bits/sec by using 192k audio.
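       Whether a given audio/video combination fits under the
       2788800 Bit/sec ceiling can be checked with a few lines
       of shell; the ~1% mux overhead used here is an
       assumption, not part of the standard:

```shell
SVCD_MAX=2788800   # total audio+video bit/s allowed by the standard

svcd_fits() {
  video_kbit=$1
  audio_kbit=$2
  # convert to bit/s and add ~1% for mplex information
  total=$(( (video_kbit + audio_kbit) * 1000 * 101 / 100 ))
  if [ "$total" -le "$SVCD_MAX" ]; then
    echo "fits ($total bit/s)"
  else
    echo "too tight ($total bit/s)"
  fi
}

svcd_fits 2550 224   # too tight, as the text warns
svcd_fits 2550 192   # dropping to 192k audio squeezes it in
```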
       Movies are shot on film at 24 frames/sec. For PAL
       broadcast the film is simply shown slightly "too fast" at
       25 frames/sec (much to the pain of people with absolute
       pitch). The -I 0 flag turns off the tedious calculations
       needed to compensate for field interlacing, giving much
       faster encoding.
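       The PAL speed-up is simply a factor of 25/24, which
       shortens the running time by about 4% (and raises the
       pitch by the same ratio). A trivial check:

```shell
# Running time of a 24 fps film when played at PAL's 25 fps.
pal_minutes() {
  film_minutes=$1
  echo $(( film_minutes * 24 / 25 ))
}

pal_minutes 100   # → 96: a 100-minute film lasts 96 minutes on PAL
```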

       Unfortunately, for movies broadcast in NTSC (US style 30
       frames/60 fields per sec) video this will produce very
       poor compression. The "pulldown" sampling used to produce
       60 fields a second from a 24 frame a second movie means
       half the frames in an NTSC broadcast *are* field
       interlaced.

       For  SVCD  encoding,  you can of course also use yuvscaler
       for the downscaling rather than letting mpeg2enc do  that.

       Don't forget the -S and -B options mentioned above. You
       want the stream to fit on the CD, don't you?

       Multiplexing Example

       > mplex -f  4  -b  300  -r  2750  sound.mp2  video.m2v  -o

       -f 4    tells mplex to encode an SVCD

       -r 2750 is the calculated audio + video bitrate + 1-2%
               multiplex information

       -b 300  is the buffer available on the playback device,
               the same value used for the video encoding (there
               set with the -V option)

       Creating the CD:

       > vcdimager -t svcd testvideo.mpg

       Creates a videocd.bin, the data file, and a videocd.cue,
       which is used as the control file for cdrdao.

       Use cdrdao to burn the image as mentioned earlier.

       Note: If you want to build "custom" VCD/SVCD you will need
       to use the mplex -f 2 and -f 5 switches.

       Note: The VCD/SVCD stuff may work on your HW player or
       not. There are many reports that it works quite well;
       don't be worried if it does not work, and I am not
       responsible if it doesn't.

       The output is created in a single pass. You can use the
       normal video encoding process. The audio file, or the
       sound that is already in the recorded avi, has to be
       given to yuv2divx as an option. You can specify a WAV
       file, an edit list, or, as in this case, the same file
       we're getting the video from.

       Enough talk, here is an example:
       > lav2yuv stream.avi | yuv2divx -A stream.avi -o

       This looks a bit strange because yuv2divx reads the YUV
       video stream from stdin but has the audio passed in by an
       option. The -A option specifies the audio stream; in this
       case it is also in our stream.avi. The output is an .avi
       file, because Divx streams are usually stored in AVI
       containers; the .divx extension is also sometimes used.

       > lav2yuv stream.avi | yuvdenoise | yuvscaler -O
       SIZE_640x480 | yuv2divx -b 2500 -a 192 -E DIV5 -A
       stream.avi -o output.avi

       You see that using other tools to get the stream into the
       proper form is no problem. Here we set the maximum
       bitrate to 2500kBit/s and the audio bitrate to 192kBit/s.
       The video codec used this time is DIV5.

       A bitrate of 2500 is considered rather high for Divx
       encoding. Divx encoding offers its greatest utility
       (giving decent images at high compression) at bitrates
       where MPEG2 encoding would generate poor quality.
       Experimentation is encouraged, of course, but the general
       rule of thumb appears to be that Divx offers similar
       quality at two-thirds of the bitrate of MPEG1/2, up to a
       bitrate of 1500kBit. From a bitrate of 1500-2500 kBit the
       gain of using Divx (= MPEG-4) compared to MPEG1/2 gets
       smaller. Note that this is video bitrate only; the audio
       bitrate remains the same, as the same type of encoding is
       used for both.
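       The two-thirds rule of thumb from the text can be
       expressed as a tiny helper (an illustration of the
       heuristic only, valid per the text up to about 1500
       kBit):

```shell
# Divx bit-rate giving roughly the same quality as a given
# MPEG1/2 video bit-rate, per the rule of thumb above.
divx_equiv() {
  mpeg_kbit=$1
  echo $(( mpeg_kbit * 2 / 3 ))
}

divx_equiv 1152   # → 768: VCD-rate MPEG1 needs only ~768 kBit/s as Divx
```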


       If you do not need to perform any alteration of the audio
       or video streams, you can perform a conversion to Divx in
       one step using the lav2divx utility.

       lav2divx -b 1000 -E DIV5 -o output.avi input.editlist

       This takes the edit list and uses it as the source for
       both the video and the audio. Since it's all done in one
       step, there's no opportunity to use any of the YUV
       filters.

       Using filters helps to increase the image quality of
       fixed bitrate video streams. With VBR (variable bit rate)
       video, the file size is reduced instead.

       > lav2yuv stream.avi | yuvmedianfilter | mpeg2enc -o
       video.m1v

       Here the yuvmedianfilter program is used to improve the
       image. This removes some of the low frequency noise in
       the images and also sharpens the image a little. It takes
       a center pixel and averages the pixels around it that
       fall within the threshold; it then replaces the center
       pixel with this new value. You can use the -r (radius)
       option for a different search radius, and -t to control
       the threshold for including pixels in the average. The
       defaults, -r 2 and -t 2, look good.

       > lav2yuv stream.avi | yuvdenoise | mpeg2enc -o video.m1v
       Now we are using yuvdenoise to improve the image. The
       filter mainly reduces colour and luminance noise and
       flickering due to phase errors.
       If you want yuvdenoise also to deinterlace the stream,
       use the -F option.

       > lav2yuv stream.avi | yuvkineco -F 1 | mpeg2enc -o
       video.m1v

       yuvkineco is used for NTSC sources. It does the
       conversion from 29.97 fps to 23.976 fps; you can call it
       "reverse 2-3 pulldown" (more info about this is in
       README.2-3pulldown). yuvkineco only removes NTSC specific
       problems, so if you want to improve the image you should
       also use yuvdenoise:
       > lav2yuv stream.avi | yuvkineco | yuvdenoise | mpeg2enc
       -o video.m1v

       > lav2yuv stream.avi | yuvycsnoise | mpeg2enc -o video.m1v
       yuvycsnoise is also used for NTSC and is specialized for
       NTSC Y/C separation noise. If the video capture hardware
       has only a poor Y/C separator, then at vertical stripes
       (especially red/blue) noise appears that looks like a
       checkered flag and inverts bright/dark every frame.
       yuvycsnoise reduces noise of this type. You can also use
       different thresholds for luma/chroma and choose the
       optimizing method.
       yuvycsnoise only works correctly with NTSC material that
       is:
         * full height (480 lines)
         * full motion captured (29.97 fps)
         * captured with poor Y/C separator hardware


       For transcoding existing MPEG-2 streams from digital TV
       cards or DVD, a still lower data-rate than for broadcast
       will give good results. Standard VCD 1152 kBit/s
       typically works just fine for MPEG1. The difference lies
       in the signal/noise ratio of the original: the noise in
       analog material makes it much harder to compress.

       You will also need to manually adjust the audio delay
       offset relative to video when multiplexing. Very often
       around 150ms delay seems to do the trick.

       You have to download the ac3dec and mpeg2dec packages.
       You can find them at the mjpeg homepage
       (http://sourceforge.net/projects/mjpeg). You also need
       sox and toolame if you want to use the script.

       In the scripts directory there is a transcode script  that
       does most of the work.

       So transcoding looks like this:
       > transcode -V -o vcd_stream mpeg2src.mpg

       -V :  sets the options so that a VCD compatible stream is
             created
       -o vcd_stream: a vcd_stream.m1v (video) and vcd_stream.mp2
                      (audio) is created

       mpeg2src.mpg:  specifies the source stream

       The script also prints something like this:
       > SYNC 234 mSec

       You will need to adjust the audio/video startup delays
       when multiplexing to ensure that audio and video are
       synchronized. The exact delay (in milliseconds) that you
       need to pass to mplex via the "-O" option to synchronize
       audio and video is printed by the extract_ac3 tool,
       labeled "SYNC", when run with the "-s" flag.
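       Copying the SYNC value into mplex's -O option by hand is
       error-prone. A small hypothetical wrapper can pull the
       number out of a line shaped like the "SYNC 234 mSec"
       message shown above (the exact output format of
       extract_ac3 may vary):

```shell
# Extract the millisecond value from a "SYNC <n> mSec" line on stdin.
sync_ms() {
  sed -n 's/.*SYNC \([0-9][0-9]*\) mSec.*/\1/p'
}

echo "SYNC 234 mSec" | sync_ms   # → 234
```

       The result could then be fed straight into the
       multiplexer, e.g. mplex -O "$(... | sync_ms)" ... .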

       Then you need to multiplex them like this:
       > mplex -f  1  -O  234  vcd_stream.mp2  vcd_stream.m1v  -o

       -f 1 :   Mux format is VCD

       -O 234 : Video timestamp offset in mSec, as printed by
       the script.

       NTSC sources are trickier, because movies may be recorded
       with 3:2 pulldown NTSC with 60 fields/sec. mpeg2dec is
       designed for playback on computers and regenerates the
       original 24 frames/sec. If you then encode the video at
       30 frames/sec, the created video is much too short for
       the encoded audio. The transcoding can still be made to
       work, but it must be done like this:

       > cat mpeg2src.mpg | mpeg2dec -s  -o YUVs | mpeg2enc -I  0
       -f 4 -q 9 -V 200 -b 2500 -p -o svcd_stream.m2v

       The -p option tells mpeg2enc to generate header flags for
       3:2 pulldown of a 24fps movie. It may also work if you do
       not add the -p flag.

       You do not need the -p flag when transcoding to VCD
       format, because pulldown is not supported in MPEG1.

       If you want to do every step on your own it  has  to  look
       like this:

       Extracting Audio:
       >  cat  test2.mpg  |  extract_ac3  - -s | ac3dec -o wav -p
       sound.wav 2>/dev/null

       One of the first lines shown contains the label "SYNC";
       you have to use this time afterwards for the
       multiplexing. The 2>/dev/null redirects the output of
       ac3dec to /dev/null. In the next step you generate the
       mpeg audio stream:
       > cat sound.wav | mp2enc -V -v 2 -o audio.mp2

       -V :  forces VCD format; the sampling rate is converted
             to 44.1kHz from 48kHz

       -v 2: unnecessary, but if you use it mp2enc tells you how
             many seconds of the audio file are already encoded.

       -o :  specifies the output file.

       The other version uses sox and toolame in a single
       command:
       >  cat  test2.mpg | extract_ac3 - -s | ac3dec -o wav | sox
       -t wav /dev/stdin -t wav -r 44100 /dev/stdout | toolame -p
       2 -b 224 /dev/stdin audio.mp2

       One  of  the first lines output contains the label "SYNC".
       You have to use this  time (referred  to  as  "SYNC_value"
       below) when doing the multiplexing.

       You can generate VCD and SVCD videos, and your own MPEG1/2
       streams. There are other output modes (try "mpeg2dec
       --help"), but the most important here are:

       YUV :  is the full image size

       YUVs : is SVCD size

       YUVh : is VCD size

       Mplex with:
       >  mplex  -f  1  -O  SYNC_value audio.mp2 video_vcd.m1v -o

       -f 1 : generates a VCD stream

       For SVCD creation use:
       > cat test2.mpg | mpeg2dec -s -o YUVs | mpeg2enc -f 4 -q 9
       -V 200 -o video_svcd.mpg

       -f 4 :   Set options for MPEG 2 SVCD

       -q 9 :   Quality   factor  for  the  stream  (VBR  stream)
                (default q: 12)

       -V 200 : Target video buffer size in KB

       -o :     Output file

       Mplex with:
       > mplex -f 4  -b  200  -r  2755  audio.mp2  video_svcd  -o

       -f 4 :    generates an SVCD stream

       -b 200 :  Specify the video buffer, the same value also
                 used while encoding the video

       -r 2755:  Specify the data rate of the output stream in
                 kbit/sec

       For other video output formats this might work:
       > cat test2.mpg | mpeg2dec -s -o YUV | yuvscaler -O
       SIZE_320x200 -O NOT_INTERLACED | mpeg2enc -o

       Running bbdmux on the source file without a stream id
       lists the streams it contains, for example:
       Found stream id 0xBE  = Padding Stream

       Extract audio with:
       > bbdmux myvideo.mpg 0xC0 audio.mp1

       Convert it to wav:
       > mpg123 -w audio.wav audio.mp1

       Extract video with:
       > bbdmux myvideo.mpg 0xE0 video.m1v

       Converting video to an mjpeg avi stream:
       > cat video.m1v | mpeg2dec -o YUV | yuv2lav -f a -o
       test.avi

       Then adding the sound to the avi:
       > lavaddwav test.avi audio.wav final.avi

       If the source video already has the size of the target
       video, use -o YUV. Using YUVh makes the video half the
       size.

       The rest can be done just like editing and encoding other
       videos.

       If you have videos with ac3 sound you only have  to  adapt
       the commands above.

       Extracting Audio:
       >  cat  test2.mpg  |  extract_ac3  -  -s  |  ac3dec -o wav
       2>/dev/null >sound.wav

       Extract video:
       > cat test2.mpg | mpeg2dec -s -o YUVh | yuv2lav -f a -q 85
       -o test.avi

       Adding the sound:
       > lavaddwav test.avi sound.wav fullvideo.avi

       NOTE: You need a lot of disk space. 1GB of video grows to
       about 2GB at SVCD format, and of course space is needed
       for some temp files. Converting the video to mjpeg also
       takes some time.

       On my Athlon 500 I never get more than 6-7 frames a
       second. You lose quality each time you convert a stream
       into another format!


       If absolute quality is your objective, a modest
       improvement can be had by increasing the motion
       estimation search radius (mpeg2enc's -r option); note
       that the radius is rounded to the nearest multiple of 8.
       Furthermore, on modern CPUs the speed gained by reducing
       the radius below 16 is not large enough to make the
       marked quality reduction worthwhile for most
       applications.
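       The rounding means some -r values are equivalent. A
       sketch of nearest-multiple-of-8 rounding (exactly how
       mpeg2enc rounds internally is an assumption here):

```shell
# Round a motion search radius to the nearest multiple of 8.
round_radius() {
  echo $(( ( $1 + 4 ) / 8 * 8 ))
}

round_radius 16   # → 16
round_radius 20   # → 24: under this rule -r 20 behaves like -r 24
```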

       Creating streams to be played from disk using software
       players

       Usually MPEG player software is much more flexible than
       the hardware built into DVD and VCD players. This
       flexibility allows significantly better compression to be
       achieved for the same quality. The trick is to generate
       video streams that use big video buffers (500KB or more)
       and variable bit-rate encoding (the -V and -q flags to
       mpeg2enc). Software players will often also correctly
       play back the much more efficient MPEG layer 3 (yes,
       "MP3") audio format. A good mp3 encoder like lame will
       produce results at 128Kbps comparable to layer 2 at
       224Kbps.

       The degree to which mpeg2enc tries to split work between
       concurrently executing threads is controlled by the -M or
       --multi-thread [0..32] option. This optimizes mpeg2enc
       for the specified number of CPUs. By default (-M 1),
       mpeg2enc runs with just a little multi-threading: reading
       of frames happens concurrently with compression. This is
       done to allow encoding pipelines that are split across
       several machines (see below) to work efficiently without
       the need for special buffering programs.
       If you are encoding on a single-CPU machine where RAM is
       tight, you may find that turning off multi-threading
       altogether by setting -M 0 works slightly more
       efficiently.

       For SMP machines with two or more processors you can
       speed up mpeg2enc by setting the number of concurrently
       executing encoding threads you wish to utilize (e.g. -M
       2). Setting -M 2 or -M 3 on a 2-way machine should allow
       you to speed up encoding by around 80%.

       Obviously, if your encoding pipeline contains several
       filtering stages, it is likely that you can keep two or
       more CPUs busy simultaneously even without using -M.
       Denoising with yuvdenoise is particularly demanding and
       uses almost as much processing power as MPEG encoding.

       If you have more than one computer, you can also split
       the encoding pipeline between computers using the
       standard "rsh" or "rcmd" remote shell execution commands.
       For example, if you have three computers: one could do
       decoding and scaling, the next could do denoising, and
       the third could do MPEG encoding:

       > rsh machine1 lav2yuv mycapture.eli | yuvscaler -O SVCD |
       yuvdenoise | rsh machine3 mpeg2enc -f 4 -o mycapture.m2v

       Note how the remote command executions are set up so that
       the data is sent directly from the machine that produces
       it to the machine that consumes it.
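       The plumbing of such a split pipeline can be tried
       locally with harmless stand-in commands (tr and gzip
       below play the roles of the lavtools; no mjpegtools or
       second machine needed):

```shell
# Producer | filter | consumer, each stage reading stdin and
# writing stdout, exactly like the rsh pipeline above.
echo "test frame data" \
  | tr 'a-z' 'A-Z' \
  | gzip -c \
  | gunzip -c
# prints: TEST FRAME DATA
```

       Replacing a stage with "rsh somehost command" moves only
       that stage to another machine; the pipe semantics stay
       identical.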

       In practice, for this to be worthwhile the network you
       are using must be fast enough to avoid becoming a
       bottleneck. For Pentium-III class machines or above you
       will need 100Mbps Ethernet; for really fast machines
       switched 100Mbps Ethernet (or better!) may be needed.
       Setting up the rshd ("Remote SHell Daemon") needed for
       rsh to do its work and configuring "rsh" are beyond the
       scope of this document, but it's a standard package and
       should be easily installed and activated on any Linux or
       BSD distribution.

       Be aware that this is potentially a security issue, so
       use it with care on machines that are visible to outside
       networks.


       Quicktime files captured using lavrec can be edited using
       Broadcast2000. mjpeg AVI files captured using the
       streamer tool from the xawtv package can be edited,
       compressed and played back using software. Hardware
       playback is not possible for such files due to
       limitations in the Zoran hardware currently supported.

       MPEG files produced using the tools are known to play
       back correctly on:
       dxr2 (hardware decoder card)
       mtv                 MPEG1 only
       gtv                 MPEG1 only
       MS Media player version 6 and 7
       SW DVD Player

       It seems that the MS Media player likes MPEG-1 streams
       more if you used -f 1 when multiplexing.

              If set to "1", drop-frame timecode is displayed
              for NTSC input instead of the default
              non-drop-frame timecode. This affects glav,
              lavplay, lavrec, yuvplay and yuvkineco.


       If you have any problems or suggestions feel free to mail
       me (Bernhard Praschinger):
       waldviertler@users.sourceforge.net

       There is a lot of stuff added from the HINTS which  Andrew
       Stevens (andrew.stevens@nexgo.de) created.

       And there are some people who helped me with program
       descriptions and hints.

       If you have questions, remarks, problems or you just  want
       to  contact  the developers, the main mailing list for the
       MJPEG-tools is:

       Although little bits have been done by everyone  the  main
       work was roughly as follows:

       lav* : Ronald Bultje <rbultje@ronald.bitfreak.net>, Gernot
       Ziegler <gz@lysator.liu.se>
       mpeg2enc mplex bits-and-pieces : andrew.stevens@nexgo.de
       libmjpeg, libmovtar: Gernot Ziegler <gz@lysator.liu.se>

       Many thanks and kudos to Rainer Johanni, the original
       author, who started this all and did most of the hard
       work on the lavtools.


       The mjpeg homepage is at:
       http://sourceforge.net/projects/mjpeg

       vcdimager is available at:

       cdrdao is available at: http://cdrdao.sourceforge.net/

       Linux Video Studio is available at:
