If you're transcoding, you should be using something suited for that purpose,
like ffmpeg.
Enclosed is a shell script that I use for transcoding arbitrary videos to DVD
format.
Feel free to change the framerate, frame size, and aspect ratio to any legal
values.
I use mpeg2enc only for creating
o big deal.
I tried very hard to document the bejeezus out of my code, including an
HTML-based implementation document, so hopefully it's not too hard for you to
figure out how to do what you want.
Steven Boswell
From: E Chalaron
To: Steven Boswell II ; M
	pthread_mutex_lock (&linemutex);
	thisline = threadline++;
	pthread_mutex_unlock (&linemutex);
	if (thisline < ptharg->height) {
		startx = thisline * ptharg->width;
		stopx = ptharg->width + startx;
		for (x = startx; x < stopx; x++) {
			filterpixel (ptharg->outBuf, ptharg->inBuf, x);
		}
	}
}
//mjpeg_debug("t
y4mdenoise does that sort of threading internally. It'll denoise the intensity
plane and color plane separately, plus it has reader threads and writer
threads. I wrote a small thread-related class hierarchy that should be
reusable. If nothing else, it should be inspirational.
BTW, you probab
--- On Fri, 6/26/09, Hervé wrote:
>if I remember correctly (I didn't verify with the latest version),
>TOP_FORWARD only shifts luma (chroma was left unchanged)
I finally had time to check this. I put a bunch of "yuvcorrect -T TOP_FORWARD
-T INTERLACED_TOP_FIRST" and "yuvcorrect -T BOTT_FORWARD -T
IN
>I've got to get out the door and don't have time to do much more
>than clear out the mailbox.
No problem...I couldn't get to this e-mail until after work anyway :-)
>Shift the video one line up or down within the frame. This is
>MUCH better (I think) than shifting the video 1/2 frame.
Thanks f
>yuvkineco is top field first only.
>[...]
>yuvcorrect top_forward is harmful.
Rats...my memory of the conversation was that TOP_FORWARD was
necessary to correct DV of telecined video. (I searched the
mailing list archives, by asking Google for "TOP_FORWARD
site:mail-archive.com", but couldn't fi
When I use my Canopus ADVC 300 to digitize a 24fps video that's been converted
to 30fps, I get a DV file that needs yuvkineco run on it in order to reverse
the telecine. Experience has shown me that I need to pipe the video through
"yuvcorrect -T TOP_FORWARD -T INTERLACED_TOP_FIRST" before send
6
SDL-devel-1.2.13-3.fc9.i386
Dunno if any of them are significant.
Steven Boswell
--- On Thu, 6/4/09, Hervé wrote:
Le 4 juin 09 à 21:26, Steven Boswell II a écrit :
> For those of you keeping up with latest
> CVS, check it out!
could you give some tips?
I downloaded the CVS source, then ran
cd …
.
Whoops! That was an integration error. I just checked in the fix. Sorry
about that!
For those of you that aren't subscribed to the mjpeg-developer list, I've been
doing heavy development of y4mdenoise lately. I've finally found a practical
implementation of my original idea, and it's faster
Granted, mplex shouldn't crash on this, but you do realize that you're
multiplexing two sound files together without any video file, right? Shouldn't
one of them be a video file?
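If it helps, here is a sketch of what was presumably intended, with one video elementary stream in place of the duplicated audio file (the video filename is a placeholder):

```shell
# One video stream plus one audio stream ("-f 8" is the DVD
# profile, as in the original command).  maus.m2v is a
# placeholder for the missing video file.
mplex -f 8 maus.m2v maus.mp2 -o film.mpg
```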
--- On Fri, 5/15/09, Martin Leicht wrote:
$ mplex -f 8 maus.mp2 maus.mp2 -o film.mpg
-
>>Anyone know of a ready-to-use package for fedora containing smilutils
>>(smil2yuv, etc.)? I can't find one in the usual repositories, and
>>thought I'd ask before rolling my own rpm.
>
>You'll have to roll your own by checking out a copy from CVS.
If it helps, here's the .spec file I wrote a lo
>What I've got now is a dumb little program that
>searches for dark groups and changes them to
>white so they show up and you can see which areas
>would be filtered if the stub filter routines
>actually had some code written. Sort of a
>'yuvplay' that can be used to see what areas meet
>the 'dark'
>>What about having a "selective median" (for want
>>of a better term) type filter. What I'm
>>thinking is a median type filter that becomes
>>more aggressive as the overall luminance of the
>>frame / block becomes darker.
>
>I started on that, had a prototype program that
>displayed the areas tha
>This makes me wonder: There is a difference
>between a clean, high quality stream from a human
>perspective and from the perspective of an
>encoder (in the sense that it is 'easy' to
>encode), is there? If there is no difference,
>then there is no point in talking about video
>processing that imp
> I'm not quite sure though what is meant by
> "numerical conditioning".
> All processing on a computer is "numerical", isn't
> it? :-)
By numerical conditioning, I meant ensuring that the
parts of your video that LOOK the same are NUMERICALLY
the same, so that mpeg2enc's motion-detection will
>As a last step before the encoder perhaps
>'y4mdenoise -t 1' would be a good choice.
Actually, "y4mdenoise -z 1 -t 1"...the default value
of -z is 2, and "y4mdenoise -z 2 -t 1" will probably
generate nasty results.
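In pipeline form, that suggestion might look like the following sketch; the surrounding stages and filenames are my placeholders, and only the y4mdenoise flags come from the text:

```shell
# Hypothetical pipeline around "y4mdenoise -z 1 -t 1".
lav2yuv input.avi |
    y4mdenoise -z 1 -t 1 |
    mpeg2enc -o output.m1v
```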
Steven Boswell
ulatekh at yahoo dot com
>Is it always a Good Thing to use yuvdenoise as
>some sort of numerical conditioner before feeding
>the stream to mpeg2enc, no matter the image
>cleanness? Are there any cases where using
>'yuvdenoise -f' is not advisable?
A new yuvdenoise has been checked into CVS, and I
haven't examined it to s
>So, basically you are suggesting classical
>timeline editing with addition of hierarchical
>grouping of video fragments and transition
>effects?
I guess so. I don't know all the video-editing
terminology yet. :-)
Being able to treat video production as a bunch of
independent but combinable part
>I'm also developing an obsession for open source
>video processing. I develop scripts (and
>graphical wizards to control/run them, for
>inexperienced/lazy users) that push the limits of
>the current state of Linux video processing
>tools.
Ah, cool! That wasn't the direction I was
planning to tak
>>Clearly, kino needs to deal with other file
>>formats, such as lossless-compressed Quicktime
>>files.
>
>Ok so need to export in quicktime to feed into
>kino really.
Oh...does kino already support Quicktime? Do we have
a tool that'll export raw YUV video as
lossless-compressed Quicktime files?
Hello E.Chalaron,
I've just finished several improvements to the
"near-perfection" scripts I posted a few months
back, but haven't had time to write up my big
explanation. I hope to get to that soon.
>I tend to do the following to get the reels under
>Kino :
>
>(find . -name \*.pnm | xargs cat
> Feed progressive stream to "mpeg2enc -p",
> so don't do 2nd "yuvcorrect -T
> INTERLACED_TOP_FIRST".
Ah, I didn't realize that. That should end up in the
man page, and as a warning inside mpeg2enc, probably.
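Until it does, here is a sketch of the corrected chain for a telecined source; filenames and any flags beyond those quoted above are my placeholders:

```shell
# yuvkineco emits a progressive stream, so encode with
# "mpeg2enc -p" and skip the second
# "yuvcorrect -T INTERLACED_TOP_FIRST".
lav2yuv capture.avi |
    yuvcorrect -T TOP_FORWARD -T INTERLACED_TOP_FIRST |
    yuvkineco |
    mpeg2enc -p -o film.m1v
```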
> And, if the result of "yuvcorrect -T LINE_SWITCH"
> twice is correct,
> both LINE_SWI
> Ok, it realy must have something to do with
> multiplexing.
IIRC, you're using mjpegtools 1.6.2. I'm pretty sure
our DVD handling was still embryonic at that point.
You may want to upgrade to 1.6.3, the latest release.
(A lot of work is going on in mpeg2enc right now, so
I'm not sure if it's s
The enclosed script was used to process a Laserdisc of
a film. I even hand-edited the file generated by
yuvkineco to make sure it took apart the 3-2 pulldown
exactly. And the video looks absolutely
antiseptic...on my computer screen.
But if I burn a DVD of it and play it on the TV, there
is a ve
> Current yuvkineco handles top-field-first 4:2:0
> stream only.
> I'm planning to support 4:4:4, 4:2:2, 4:1:1 streams,
> [...] I think I can hack it in
> before too long. Please wait.
That would be great. Thank you!
One other request I thought of recently...now that we
have a top-notch de-interl
>>>If you have a progressive frame in 4:2:0, then
>>>the first chroma line is the average from lines
>>>1 and 2. The second chroma line is the average
>>>of 3 and 4.
>
>Right - for 4:2:0. The "average from lines 1 and
>2" and 'lines 3 and 4' are the ":0" of 4:2:0.
>4:1:1 is not subsampled vertica
>>Some time ago, there was a discussion on 4:1:1
>>chroma subsampling in DV files of 3-2-pulldown
>>sources, and how the color needed a special
>>line-switch in order to be completely accurate.
>>(Lines 2 and 3 of every group of 4 lines have to
>>be switched, IIRC.)
>
>Can you refresh my (our) memo
I've been experimenting with yuvkineco for a few
weeks. The only tool that's done more than yuvkineco
to increase my video quality is y4mdenoise! Stripping
out 20% of the frames in a 3-2-pulled-down stream, and
allowing the remaining frames to take up 20% more
space, has been a big win! And thos
This was originally posted to mjpeg-developer, but
I figured most users would be interested in this.
>What are normal values for not-moved, moved, and
>new?
It depends entirely on your video. There are no
normal values. In general, one should watch the
numbers coming from y4mdenoise, and then l
>If you have a progressive frame in 4:2:0, then
>the first chroma line is the average from lines 1
>and 2. The second chroma line is the average of
>3 and 4.
>
>If you have an interlaced frame, the first chroma
>line is the average of the first two lines of the
>first field. With the fields inter
>o With 4:2:0 material, you can't just re-label a
>stream from "top-field-first" to "progressive" or
>vice-versa: that will screw up the chroma planes.
Eh? It was my understanding that the lines of the
top field were stored at even y indices, and the
lines of the bottom field were stored at odd y
>>LOL! Worst file name ever! :) Do you have a
>>"filt" tool to demangle that?
>
>Actually, it's quite apparent what it means
>after looking at it for a couple seconds:
Also, since my filename is meant to represent the
processing pipeline, and said pipeline is described
earlier in the same file,
>>It's my understanding that progressive-frame
>>DVDs can't be played by all DVD players.
>
>You can feed mpeg2enc a progressive stream and it
>will do The Right Thing. Basically the
>progressive frame gets split into two "fields"
>and the flag in the MPEG2 header turned on that
>says both fields
>I'm still not "sold" completely on deinterlacing
Actually, you're right, deinterlacing isn't needed
on film sources. But it's doing great on a
pee-wee soccer-league game that I shot on VHS-C.
>If yuvcorrect has been called to convert bottom
>to top first then why "pipe it through another
>yuvco
>>before the video gets sent to mpeg2enc, I pipe
>>it through another yuvcorrect that changes the
>>stream header back to top-field-interlaced.
>
>What's wrong with feeding mpeg2enc the stream of
>progressive frames? I would think that is
>exactly what you want to do, with 24fps film
>material.
I
Attached to this letter is the shell script I used
recently to convert a videotape that was made of a
24-frame-per-second film. It's almost identical to
the "near-perfection" scripts I posted earlier, but it
contains a call to yuvkineco.
If you know your videotape is of a film, and you're
not run
--- John Gay <[EMAIL PROTECTED]> wrote:
> Very nice discussion! This is the type of info I'm
> here for!
>
> I'll have to start keeping copies of these very good
> technical discussions
> regarding the various mjpeg-tools and their many
> settings.
We really oughta put this information into the
With the recent mpeg2enc bug fixes, I've finally
created artifact-free DVD video from VHS videotapes.
The last 2 artifacts I noticed have gone away. Happy
day!
I think my next step is to learn how to use yuvkineco,
to make my movie denoising/encoding more efficient. I
searched the mjpeg-users a
My last set of changes broke the -B option! And I
fixed a bunch of other bugs I found, most of which
would have caused y4mdenoise to crash upon exit. Get
the latest CVS and the problems should be fixed.
Steven Boswell
ulatekh at yahoo dot com
>>it's nice to see the main thread (the one
>>denoising intensity) running near 100% CPU
>>usage.
>
>That's how y4mdenoise has always behaved -
>but now it's using part of the 2nd cpu as well :)
But how much better is it filling up both CPUs?
Do you have any sense of how much less overall
idle ti
I don't know when we developers will officially
release a new version of mjpegtools -- there's
currently a coding frenzy around mpeg2enc and the
denoisers. So you'll still have to get the latest CVS
version in order to use y4mdenoise. But if you do, I
just finished writing the first dual-processo
>>So I say use -H!
>
>Or as I do, combine the hi-res tables and the
>tmpgenc tables - basically use the Intra portion
>of the 'hi' and the nonIntra of the tmpgenc.
>"The best of both worlds" so to speak.
I'm pretty sure I tried that, and gained back some
artifacts that I had previously removed. (
>I do hope you weren't offended that I said it was
>slow :-)
Heh, no problem -- after all, y4mdenoise is a lot
slower than I wanted it to be. But outside of
writing a multi-processor version, I'm currently
not sure what else I can do to speed it up.
Until I know, I have to expend political capita
Oops, I forgot to discuss the non-denoising-
related aspects of the way I use mpeg2enc! :-)
The first mpeg2enc in the script file generates a
DVD. "-b 9300" is the highest I go in practice;
that allows for 384 kbps audio and (my estimate)
120 kbps for the information mplex adds, staying
under th
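The arithmetic behind that budget works out as follows; the 10080 kbps ceiling (the DVD program-stream mux rate) is my assumption, while the other figures are from the message:

```shell
# Rough DVD bitrate budget, all figures in kbps.
video=9300          # "-b 9300" from the script
audio=384           # audio bitrate
mux_overhead=120    # estimated space mplex adds
total=$((video + audio + mux_overhead))
echo "total: ${total} kbps (assumed ceiling: 10080 kbps)"
```

That leaves a little headroom under the assumed ceiling.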
--- sean <[EMAIL PROTECTED]> wrote:
>I'm also trying to convert a series of old vhs
>family tapes. I'm using a borrowed canopus advc
>300. I'm about 103 artifacts to go before
>perfection. Could you post how you are
>converting your tapes? The more specific the
>better - i.e. actual command li
--- Ray Cole <[EMAIL PROTECTED]> wrote:
> I originally built
> from CVS wanting to try y4mdenoise, but it is just
> too slow for me to use.
yuvdenoise and y4mdenoise, completely separate from
algorithmic differences, have one very important
difference right off the bat. yuvdenoise analyzes the
ne
> I tried to build the mjpegtools from CVS.
> Unfortunately, I could not run
> autogen.sh successfully. The most important error:
>
> HAVE_PNG not defined in AM_CONDITIONAL
>
> What is going wrong? I have installed libpng-devel.
I, too, have a heck of a time building from CVS, and
I'm supposed
"Steven M. Schultz" <[EMAIL PROTECTED]> wrote:
On clean material I didn't see the need for the more aggressive value. For DV sources (from a Digital8 camcorder) '-l 1 -t 6' is good, for captures from a good source (laserdisc) '-l 2 -t 6'. The really bad sources such as VHS get '-l 3 -t 4' (VHS is so l
In a previous conversation on this mailing list, I was advised against using --keep-hf with analog source material. However, my (informal) tests seemed to indicate that --keep-hf did pretty good with denoised (i.e. yuvdenoise/yuvmedianfilter) analog source material. I wasn't sure if it was just m
It's my understanding that sharp transitions between light & dark areas are one of the hardest things for MPEG to encode accurately. MPEG is designed for "natural" images (i.e. stuff recorded from real-world sources).
Sounds like you had some success getting rid of it with yuvmedianfilter. Keep u
Hello,
Did anyone ever write you back about this? I couldn't find a reply on the mailing list. I wish I could help you, but all I can say is, I see this frequently on my digital cable TV. It may be a known hard problem with MPEG encoding in general. I'd like to know what to do to solve it too.
You may have the same problem, though 8000 kbps video + 256 kbps audio > 8192 kbps already, even without the extra space mplex needs.
For all the fun that DVDs promise, I have to say, I've found that for most things I'm happy with VCDs. (Not even SVCDs, just normal old VCDs.) They take 3 hour
It's my understanding that the total bitrate for DVD-quality audio/video must be between 2 Mbps and 8 Mbps. I tend to encode my DVD video at 7500 kbps and the audio at 384 kbps, so with the extra space taken by multiplexing, I tend to get under the limit.
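For what it's worth, the sum quoted there is easy to check; the 8000 kbps figure is the upper end of the range mentioned above:

```shell
# Total audio+video bitrate against the 8 Mbps upper bound (kbps).
video=7500
audio=384
total=$((video + audio))
echo "total: ${total} kbps of 8000 kbps"
```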
The errors you're getting are because ther
Is that with or without my one-line patch? Just making sure, because adding "if (denoisier.sharpen == 0) return;" to the beginning of sharpen_frame() was what it took to speed things up for me. (Just wanted to verify my observation was valid :-)
Steven Boswell, [EMAIL PROTECTED]
"Steven M. Sc
I don't think it's been ported to Windows. You're free to be the one who does it, though :-)
Steven Boswell, [EMAIL PROTECTED]
natarajan thirunavukkarasu <[EMAIL PROTECTED]> wrote:
I want to build an MJPEG encoder & decoder in a VC++ environment. Where can I get the source code? What is the procedure to
I produce a lot of VCDs. I haven't started doing SVCDs yet, because no one I know has a DVD player that can handle them. (Gotta make what works across most peoples' equipment, right? :-)
I reduce my audio/video bitrates in order to write longer movies onto VCDs. I'll put up to 100 minutes or so o
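As a rough sketch of why the bitrates have to come down, assuming (my figures, not the message's) about 800 MB usable on an 80-minute CD in VCD mode and standard VCD rates of 1150 kbps video plus 224 kbps audio:

```shell
# Average total bitrate that fits ~100 minutes on one VCD.
# The 800 MB capacity and standard VCD rates are my assumptions.
minutes=100
capacity_kbit=$((800 * 1024 * 1024 * 8 / 1000))
budget_kbps=$((capacity_kbit / (minutes * 60)))
echo "budget: ${budget_kbps} kbps vs. standard $((1150 + 224)) kbps"
```

So fitting 100 minutes means cutting the combined bitrate to roughly 80% of the standard VCD rate.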
That's because yuv2lav is not multi-threaded -- it reads from the YUV stream, compresses the image, and writes it to the output file, all in one thread. Fixing that is one of my near-future planned projects (sometime after finding a new place to live & getting a day job :-).
Steven Boswell, [EMAIL