live555 Study Notes 3 (reposted)
9. Detailed Analysis of H264 RTP Transport (1)
In the earlier chapters on the server side, one fairly important question was left unexamined: how a file is opened and its SDP information obtained. Let's start there.
When the RTSPServer receives a DESCRIBE request for some medium, it finds the corresponding ServerMediaSession and calls ServerMediaSession::generateSDPDescription(). generateSDPDescription() iterates over every ServerMediaSubsession held by the ServerMediaSession, obtains each subsession's SDP via subsession->sdpLines(), and merges them into one complete SDP description, which it returns.
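As a rough picture of that merge step, here is a fully standalone sketch (an illustration only, not live555's actual generateSDPDescription(); FakeSubsession and generateSDP are made-up names): the session-level lines come first, then each subsession's media-level block is appended in order.

#include <string>
#include <vector>

// Standalone sketch of the idea behind generateSDPDescription():
// session-level lines first, then each subsession's "m=..." block
// as returned by its sdpLines(). Illustration only, not live555 code.
struct FakeSubsession {
  std::string sdpLines;                    // what subsession->sdpLines() would return
};

std::string generateSDP(const std::string& sessionLevelLines,
                        const std::vector<FakeSubsession>& subsessions) {
  std::string sdp = sessionLevelLines;     // "v=0\r\no=...\r\ns=...\r\n" etc.
  for (const FakeSubsession& s : subsessions) {
    sdp += s.sdpLines;                     // append this track's media-level section
  }
  return sdp;
}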
We can be fairly sure that opening and analyzing the file happens inside each subsession's sdpLines() function, so let's look at it:
char const* OnDemandServerMediaSubsession::sdpLines() {
  if (fSDPLines == NULL) {
    // We need to construct a set of SDP lines that describe this
    // subsession (as a unicast stream).  To do so, we first create
    // dummy (unused) source and "RTPSink" objects,
    // whose parameters we use for the SDP lines:
    unsigned estBitrate;
    FramedSource* inputSource = createNewStreamSource(0, estBitrate);
    if (inputSource == NULL) return NULL; // file not found

    struct in_addr dummyAddr;
    dummyAddr.s_addr = 0;
    Groupsock dummyGroupsock(envir(), dummyAddr, 0, 0);
    unsigned char rtpPayloadType = 96 + trackNumber() - 1; // if dynamic
    RTPSink* dummyRTPSink = createNewRTPSink(&dummyGroupsock,
                                             rtpPayloadType, inputSource);

    setSDPLinesFromRTPSink(dummyRTPSink, inputSource, estBitrate);
    Medium::close(dummyRTPSink);
    closeStreamSource(inputSource);
  }

  return fSDPLines;
}
What it does is this: the subsession caches the media file's SDP in fSDPLines, but on the first call fSDPLines is NULL, so it has to be built first. The way it is built is rather elaborate: a temporary source and RTPSink are created, connected into a stream, and "played" for a while before fSDPLines can be obtained. createNewStreamSource() and createNewRTPSink() are both virtual functions, so the source and sink created here are whatever the derived class specifies. We are analyzing H264, i.e. the ones specified by H264VideoFileServerMediaSubsession, so let's look at those two functions:
FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/,
    unsigned& estBitrate) {
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(),
                                                                     fFileName);
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}

RTPSink* H264VideoFileServerMediaSubsession::createNewRTPSink(
    Groupsock* rtpGroupsock,
    unsigned char rtpPayloadTypeIfDynamic,
    FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                     rtpPayloadTypeIfDynamic);
}
As you can see, an H264VideoStreamFramer and an H264VideoRTPSink are created. H264VideoStreamFramer is certainly a source too, but internally it uses yet another source, ByteStreamFileSource. We will see later why it is done this way; ignore it for now. We still haven't seen where the file is actually opened, so let's keep digging:
void OnDemandServerMediaSubsession::setSDPLinesFromRTPSink(
    RTPSink* rtpSink,
    FramedSource* inputSource,
    unsigned estBitrate) {
  if (rtpSink == NULL) return;

  char const* mediaType = rtpSink->sdpMediaType();
  unsigned char rtpPayloadType = rtpSink->rtpPayloadType();
  struct in_addr serverAddrForSDP;
  serverAddrForSDP.s_addr = fServerAddressForSDP;
  char* const ipAddressStr = strDup(our_inet_ntoa(serverAddrForSDP));
  char* rtpmapLine = rtpSink->rtpmapLine();
  char const* rangeLine = rangeSDPLine();
  char const* auxSDPLine = getAuxSDPLine(rtpSink, inputSource);
  if (auxSDPLine == NULL) auxSDPLine = "";

  char const* const sdpFmt =
    "m=%s %u RTP/AVP %d\r\n"
    "c=IN IP4 %s\r\n"
    "b=AS:%u\r\n"
    "%s"
    "%s"
    "%s"
    "a=control:%s\r\n";
  unsigned sdpFmtSize = strlen(sdpFmt)
    + strlen(mediaType) + 5 /* max short len */
    + 3 /* max char len */
    + strlen(ipAddressStr) + 20 /* max int len */
    + strlen(rtpmapLine) + strlen(rangeLine) + strlen(auxSDPLine)
    + strlen(trackId());
  char* sdpLines = new char[sdpFmtSize];
  sprintf(sdpLines, sdpFmt,
          mediaType,       // m= <media>
          fPortNumForSDP,  // m= <port>
          rtpPayloadType,  // m= <fmt list>
          ipAddressStr,    // c= address
          estBitrate,      // b=AS:<bandwidth>
          rtpmapLine,      // a=rtpmap:... (if present)
          rangeLine,       // a=range:... (if present)
          auxSDPLine,      // optional extra SDP line
          trackId());      // a=control:<track-id>
  delete[] (char*)rangeLine;
  delete[] rtpmapLine;
  delete[] ipAddressStr;

  fSDPLines = strDup(sdpLines);
  delete[] sdpLines;
}
This function obtains the subsession's SDP and saves it to fSDPLines. Opening the file should already have happened inside rtpSink->rtpmapLine(), or even earlier when the source was created. Let's set that aside for now and first get a thorough picture of how the SDP is obtained, so the focus moves to getAuxSDPLine().
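To make the format string above concrete, here is a hypothetical example of the media-level block it produces for an H264 track. The IP address, bitrate, track id, and base64 sprop-parameter-sets values are made-up placeholders; the port in the m= line is 0 for on-demand streams because the real ports are negotiated later during SETUP.

m=video 0 RTP/AVP 96
c=IN IP4 192.168.1.10
b=AS:500
a=rtpmap:96 H264/90000
a=range:npt=0-
a=fmtp:96 packetization-mode=1;profile-level-id=42C01E;sprop-parameter-sets=Z0LAHtoCg+Q=,aM4xUg==
a=control:track1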
char const* OnDemandServerMediaSubsession::getAuxSDPLine(
    RTPSink* rtpSink,
    FramedSource* /*inputSource*/) {
  // Default implementation:
  return rtpSink == NULL ? NULL : rtpSink->auxSDPLine();
}
Very simple: it calls rtpSink->auxSDPLine(), so we should look at H264VideoRTPSink::auxSDPLine(). Actually there is no need: it simply takes the PPS, SPS, etc. saved in the source and forms the a=fmtp line. In reality things are not that simple, though, because H264VideoFileServerMediaSubsession overrides getAuxSDPLine()! If it hadn't been overridden, the aux SDP line would already have been obtained while analyzing the file earlier; the fact that it is overridden means it wasn't, and it can only be obtained inside this override. Look at this function in H264VideoFileServerMediaSubsession:
char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(
    RTPSink* rtpSink,
    FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)

  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and "sprop-parameter-sets") isn't known
    // until we start reading the file.  This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink;

    // Start reading the file:
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

    // Check whether the sink's 'auxSDPLine()' is ready:
    checkForAuxSDPLine(this);
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag);

  return fAuxSDPLine;
}
The comments explain it clearly: for H264, the PPS/SPS cannot be taken from a file header; the file has to be "played" for a while first (of course, it is a raw elementary-stream file and has no header at all). In other words they cannot be obtained from the rtpSink right away. To guarantee that the aux SDP is available before the function returns, the big event loop is moved here. afterPlayingDummy() is executed when playback ends, i.e. after the aux SDP has been obtained. And what does checkForAuxSDPLine(), called before the big loop, actually do?
void H264VideoFileServerMediaSubsession::checkForAuxSDPLine1() {
  char const* dasl;

  if (fAuxSDPLine != NULL) {
    // Signal the event loop that we're done:
    setDoneFlag();
  } else if (fDummyRTPSink != NULL
             && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
    fAuxSDPLine = strDup(dasl);
    fDummyRTPSink = NULL;

    // Signal the event loop that we're done:
    setDoneFlag();
  } else {
    // try again after a brief delay:
    int uSecsToDelay = 100000; // 100 ms
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay,
                              (TaskFunc*)checkForAuxSDPLine, this);
  }
}
It checks whether the aux SDP has already been obtained; if so, it sets the done flag and returns. If not, it checks whether the sink now has an aux SDP line; if so, it also sets the done flag and returns. If it still hasn't been obtained, it reschedules this check as a delayed task, run every 100 milliseconds; each check essentially amounts to one call to fDummyRTPSink->auxSDPLine(). The big loop stops when it detects that fDoneFlag has changed, at which point the aux SDP has been obtained. If, however, the end of the file is reached without ever getting the aux SDP, afterPlayingDummy() is executed, which stops the big loop; the parent subsession class then closes these temporary source and sink objects, and they are re-created when real playback starts.
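For completeness, afterPlayingDummy() boils down to unscheduling the pending check task and setting the done flag. A sketch modeled on the live555 sources (treat the exact bodies as approximate):

static void afterPlayingDummy(void* clientData) {
  H264VideoFileServerMediaSubsession* subsess
    = (H264VideoFileServerMediaSubsession*)clientData;
  subsess->afterPlayingDummy1();
}

void H264VideoFileServerMediaSubsession::afterPlayingDummy1() {
  // Unschedule any pending 'checking' task:
  envir().taskScheduler().unscheduleDelayedTask(nextTask());
  // Signal the event loop that we're done:
  setDoneFlag();
}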
10. Detailed Analysis of H264 RTP Transport (2)
The previous chapter never actually located the code that opens and analyzes the file, because it turns out to be hidden rather deep, and H264 involves several Sources chained together in a sort of relay. So in this chapter we'll analyze the file handling together with the H264 Sources.
Where to start? Let's start with the source! Why the source? Because that's where I feel like starting, all right?
FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/,
    unsigned& estBitrate) {
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(),
                                                                     fFileName);
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}
First a ByteStreamFileSource is created. Obviously this is a source that reads data from a file byte by byte, and there isn't much to say about it, except that opening the file and the read/seek operations really do live here. The component that ultimately handles the h264 file, analyzes its format, and extracts frames or NAL units should be this source: H264VideoStreamFramer. So the place where the file is opened has been found, but the code that analyzes the file is the more valuable part, and for that we have to look at H264VideoStreamFramer.
H264VideoStreamFramer inherits from MPEGVideoStreamFramer, MPEGVideoStreamFramer inherits from FramedFilter, and FramedFilter inherits from FramedSource.
Ah, a Filter pops up in the middle. Does seeing it make you think of DirectShow filters? Or Photoshop filters? Their meanings should be roughly the same: something inserted between the source and the renderer (sink) to process the media data. Understood that way, it is actually closer to the Photoshop notion. Honestly, I suspect I'm not entirely right either, but let's go with that understanding; it won't be off by a thousand miles. And since we understand it this way, we have reason to expect several filters chained one after another, singing: hand in hand, marching forward together...
H264VideoStreamFramer inherits from MPEGVideoStreamFramer. MPEGVideoStreamFramer is fairly simple; it just hands part of its work over to MPEGVideoStreamParser (a parser shows up, something new; let's not worry about it yet). Let's take a closer look.
The constructor:
H264VideoStreamFramer::H264VideoStreamFramer(UsageEnvironment& env,
                                             FramedSource* inputSource,
                                             Boolean createParser,
                                             Boolean includeStartCodeInOutput)
  : MPEGVideoStreamFramer(env, inputSource),
    fIncludeStartCodeInOutput(includeStartCodeInOutput),
    fLastSeenSPS(NULL),
    fLastSeenSPSSize(0),
    fLastSeenPPS(NULL),
    fLastSeenPPSSize(0) {
  fParser = createParser
    ? new H264VideoStreamParser(this, inputSource, includeStartCodeInOutput)
    : NULL;
  fNextPresentationTime = fPresentationTimeBase;
  fFrameRate = 25.0; // We assume a frame rate of 25 fps,
                     // unless we learn otherwise (from parsing a Sequence Parameter Set NAL unit)
}
Since createParser is always true, the main thing the constructor does is create an H264VideoStreamParser object (ignore the parser for now).
There is nothing much to see in the other functions; they all revolve around the saved PPS and SPS. So the analysis work has moved into H264VideoStreamParser. A Parser is, of course, an analyzer. The parsers' base class is StreamParser. StreamParser does quite a bit of work, so let's first figure out what it does and what kind of calling framework it might provide for its subclasses... OK, I've finished reading it. Let me just give the results of the analysis:
StreamParser's main job is to provide bit-level access to the data, because bit-by-bit analysis is very common when handling media formats. The two functions skipBits(unsigned numBits) and unsigned getBits(unsigned numBits) are obviously bit-oriented operations. The two buffers in the member unsigned char* fBank[2] are used in rotation. This class holds a source, and naturally it should hold the ByteStreamFileSource instance, not the FramedFilter. The getBytes() or getBits() calls ultimately cause the file to be read. After each read from the file, StreamParser::afterGettingBytes1() is called; it does a little simple work and then invokes the callback fClientContinueFunc. fClientContinueFunc might point into the Framer or into the RTPSink, since the Framer is free to hand an RTPSink member function to the Parser. Which one it actually points to can only be determined by further analysis.
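To make the bit-access idea concrete, here is a tiny standalone bit reader that mimics what getBits()/skipBits() provide (an illustration only, not StreamParser's actual implementation; it ignores the double-buffered fBank mechanism and simply reads MSB-first from one buffer):

#include <cstdint>
#include <cstdio>

// Minimal MSB-first bit reader over a byte buffer, in the spirit of
// StreamParser::getBits()/skipBits(). Illustration only.
struct BitReader {
  const uint8_t* data;
  unsigned sizeBits;
  unsigned posBits = 0;

  unsigned getBits(unsigned numBits) {           // numBits <= 32
    unsigned result = 0;
    for (unsigned i = 0; i < numBits && posBits < sizeBits; ++i, ++posBits) {
      unsigned bit = (data[posBits / 8] >> (7 - posBits % 8)) & 1;
      result = (result << 1) | bit;
    }
    return result;
  }
  void skipBits(unsigned numBits) { posBits += numBits; }
};

int main() {
  // First byte of a hypothetical SPS NAL unit: forbidden_zero_bit, nal_ref_idc, nal_unit_type
  const uint8_t nalHeader[] = { 0x67 };          // 0 11 00111 -> nal_unit_type 7 (SPS)
  BitReader br{ nalHeader, 8 };
  br.skipBits(1);                                // forbidden_zero_bit
  unsigned nalRefIdc = br.getBits(2);
  unsigned nalUnitType = br.getBits(5);
  std::printf("nal_ref_idc=%u nal_unit_type=%u\n", nalRefIdc, nalUnitType);
  return 0;
}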
Now let's analyze StreamParser's child, MPEGVideoStreamParser.
MPEGVideoStreamParser::MPEGVideoStreamParser(
    MPEGVideoStreamFramer* usingSource,
    FramedSource* inputSource)
  : StreamParser(inputSource,
                 FramedSource::handleClosure,
                 usingSource,
                 &MPEGVideoStreamFramer::continueReadProcessing,
                 usingSource),
    fUsingSource(usingSource) {
}
There are several interesting things in MPEGVideoStreamParser's constructor.
First, what does the parameter usingSource mean? It is the Source that is using this Parser. inputSource is clear: it is the source the data actually comes from, i.e. the ByteStreamFileSource; and evidently the source saved inside StreamParser is the ByteStreamFileSource. From the callbacks passed to StreamParser and their arguments we can see that they all point to functions of the Framer that uses the parser (why not use virtual functions instead? Ah, the callbacks are all static functions, so they cannot be virtual). This means that after every read, MPEGVideoStreamFramer::continueReadProcessing() is called; in it frames are delimited and analyzed, and when that finishes the corresponding RTPSink function is called, where the RTPSink packetizes and sends the frame (remember? If not, go back and look at the earlier chapters).
MPEGVideoStreamParser's fTo is the output buffer pointer handed down from the RTPSink. Its saveByte() and save4Bytes() copy data from StreamParser's buffer into fTo; they are for subclasses to use. saveToNextCode() copies data up to the next synchronization byte sequence (like the thing that separates NAL units in h264, although what is meant here is not exactly the same as h264's), also for subclasses. The pure virtual function parse() is obviously where subclasses put their frame-analysis code. registerReadInterest() is called by the user to tell MPEGVideoStreamParser the address and capacity of the buffer that will receive the frame.
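The "synchronization byte sequence" for an H264 elementary stream is the Annex B start code (0x000001 or 0x00000001). As a standalone illustration of the kind of scanning that saveToNextCode()/parse() perform (not live555's actual code), here is a small function that finds the next start code in a byte buffer:

#include <cstdint>
#include <cstdio>

// Return the offset of the next Annex B start code (00 00 01) at or after 'from',
// or 'size' if none is found. A 4-byte start code (00 00 00 01) is matched too,
// since its last three bytes are 00 00 01. Illustration only, not live555 code.
static unsigned findStartCode(const uint8_t* buf, unsigned size, unsigned from) {
  for (unsigned i = from; i + 3 <= size; ++i) {
    if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) return i;
  }
  return size;
}

int main() {
  // Two tiny fake NAL units separated by start codes:
  const uint8_t stream[] = { 0, 0, 0, 1, 0x67, 0xAA, 0, 0, 1, 0x68, 0xBB };
  unsigned first = findStartCode(stream, sizeof stream, 0);  // offset 1 (the 00 00 01 inside 00 00 00 01)
  unsigned next  = findStartCode(stream, sizeof stream, first + 3);
  std::printf("first NAL unit spans bytes [%u, %u)\n", first + 3, next);
  return 0;
}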
Now we should analyze MPEGVideoStreamFramer, to clarify how MPEGVideoStreamFramer and MPEGVideoStreamParser cooperate.
MPEGVideoStreamFramer has only two important functions that use the Parser. The first is:
void MPEGVideoStreamFramer::doGetNextFrame() {
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}
Very simple: it just tells the Parser the frame buffer and its size, then runs continueReadProcessing(). So let's look at continueReadProcessing():
void MPEGVideoStreamFramer::continueReadProcessing() {
  unsigned acquiredFrameSize = fParser->parse();
  if (acquiredFrameSize > 0) {
    // We were able to acquire a frame from the input.
    // It has already been copied to the reader's space.
    fFrameSize = acquiredFrameSize;
    fNumTruncatedBytes = fParser->numTruncatedBytes();

    // "fPresentationTime" should have already been computed.
    // Compute "fDurationInMicroseconds" now:
    fDurationInMicroseconds =
      (fFrameRate == 0.0 || ((int)fPictureCount) < 0)
        ? 0
        : (unsigned)((fPictureCount * 1000000) / fFrameRate);
    fPictureCount = 0;

    // Call our own 'after getting' function.  Because we're not a 'leaf'
    // source, we can call this directly, without risking infinite recursion.
    afterGetting(this);
  } else {
    // We were unable to parse a complete frame from the input, because:
    // - we had to read more data from the source stream, or
    // - the source stream has ended.
  }
}
It first lets the Parser do its analysis (presumably parsing out one frame). When parsing completes, the frame data is already in MPEGVideoStreamFramer's buffer fTo. After computing the frame's duration, it calls FramedSource's afterGetting(), which ultimately calls into the RTPSink.
At this point we can sum up: after all this back and forth, the Parser really exposes only one function for external use: parse().
Now we can look at H264VideoStreamParser. It is also quite simple; it just adds some functions for analyzing the h264 format, all non-public of course, used only internally. Where are they used? In parse(), naturally. As for H264VideoStreamFramer, we've already covered it and there isn't much more to it, so we won't look at it again. To sum up: the RTPSink asks H264VideoStreamFramer for the next frame (in h264 this is of course not really a frame but a NAL unit); H264VideoStreamFramer tells H264VideoStreamParser the output buffer and the number of bytes it can hold, then calls H264VideoStreamParser's parse(); parse() calls ByteStreamFileSource to read data from the file until parse() has a complete frame; parse() returns, and after some processing of its own H264VideoStreamFramer hands that frame back to the RTPSink (via a callback, of course).
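To see where this whole chain gets instantiated in practice, here is a minimal server setup modeled on live555's testOnDemandRTSPServer demo (the stream name, file name, and port are placeholders). A DESCRIBE for rtsp://<host>:8554/h264Stream is what ultimately drives the sdpLines()/getAuxSDPLine() machinery discussed above, and PLAY drives the Framer/Parser/Fragmenter chain:

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    return 1;
  }

  // One session with one H264 subsession, backed by a raw .264 elementary-stream file:
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, "h264Stream", "test.264", "H264 test stream");
  sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, "test.264", False));
  rtspServer->addServerMediaSession(sms);

  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}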
There is one more piece: H264FUAFragmenter, used by H264VideoRTPSink and inheriting from FramedFilter. It is first created after the RTPSink starts playing, as follows:
Boolean H264VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  if (fOurFragmenter == NULL) {
    fOurFragmenter = new H264FUAFragmenter(envir(), fSource,
                                           OutPacketBuffer::maxSize,
                                           ourMaxPacketSize() - 12/*RTP hdr size*/);
    fSource = fOurFragmenter;
  }

  // Then call the parent class's implementation:
  return MultiFramedRTPSink::continuePlaying();
}
And it replaces H264VideoStreamFramer as the source directly connected to the RTPSink, so from then on, whenever the RTPSink wants a frame, it gets it from the fragmenter. Let's look at its most important function:
void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // We have no NAL unit data currently in the buffer.  Read a new one:
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // We have NAL unit data in the buffer.  There are three cases to consider:
    // 1. There is a new NAL unit in the buffer, and it's small enough to deliver
    //    to the RTP sink (as is).
    // 2. There is a new NAL unit in the buffer, but it's too large to deliver to
    //    the RTP sink in its entirety.  Deliver the first fragment of this data,
    //    as a FU-A packet, with one extra preceding header byte.
    // 3. There is a NAL unit in the buffer, and we've already delivered some
    //    fragment(s) of this.  Deliver the next fragment of this data,
    //    as a FU-A packet, with two extra preceding header bytes.

    if (fMaxSize < fMaxOutputPacketSize) { // shouldn't happen
      envir() << "H264FUAFragmenter::doGetNextFrame(): fMaxSize ("
              << fMaxSize << ") is smaller than expected\n";
    } else {
      fMaxSize = fMaxOutputPacketSize;
    }

    fLastFragmentCompletedNALUnit = True; // by default
    if (fCurDataOffset == 1) { // case 1 or 2
      if (fNumValidDataBytes - 1 <= fMaxSize) { // case 1
        memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1);
        fFrameSize = fNumValidDataBytes - 1;
        fCurDataOffset = fNumValidDataBytes;
      } else { // case 2
        // We need to send the NAL unit data as FU-A packets.  Deliver the first
        // packet now.  Note that we add FU indicator and FU header bytes to the front
        // of the packet (reusing the existing NAL header byte for the FU header).
        fInputBuffer[0] = (fInputBuffer[1] & 0xE0) | 28;   // FU indicator
        fInputBuffer[1] = 0x80 | (fInputBuffer[1] & 0x1F); // FU header (with S bit)
        memmove(fTo, fInputBuffer, fMaxSize);
        fFrameSize = fMaxSize;
        fCurDataOffset += fMaxSize - 1;
        fLastFragmentCompletedNALUnit = False;
      }
    } else { // case 3
      // We are sending this NAL unit data as FU-A packets.  We've already sent the
      // first packet (fragment).  Now, send the next fragment.  Note that we add
      // FU indicator and FU header bytes to the front.  (We reuse these bytes that
      // we already sent for the first fragment, but clear the S bit, and add the E
      // bit if this is the last fragment.)
      fInputBuffer[fCurDataOffset - 2] = fInputBuffer[0];          // FU indicator
      fInputBuffer[fCurDataOffset - 1] = fInputBuffer[1] & ~0x80;  // FU header (no S bit)
      unsigned numBytesToSend = 2 + fNumValidDataBytes - fCurDataOffset;
      if (numBytesToSend > fMaxSize) {
        // We can't send all of the remaining data this time:
        numBytesToSend = fMaxSize;
        fLastFragmentCompletedNALUnit = False;
      } else {
        // This is the last fragment:
        fInputBuffer[fCurDataOffset - 1] |= 0x40; // set the E bit in the FU header
        fNumTruncatedBytes = fSaveNumTruncatedBytes;
      }
      memmove(fTo, &fInputBuffer[fCurDataOffset - 2], numBytesToSend);
      fFrameSize = numBytesToSend;
      fCurDataOffset += numBytesToSend - 2;
    }

    if (fCurDataOffset >= fNumValidDataBytes) {
      // We're done with this data.  Reset the pointers for receiving new data:
      fNumValidDataBytes = fCurDataOffset = 1;
    }

    // Complete delivery to the client:
    FramedSource::afterGetting(this);
  }
}
If there is no data in the input buffer, it calls fInputSource->getNextFrame(). fInputSource is the H264VideoStreamFramer; the framer's getNextFrame() calls H264VideoStreamParser's parse(), parse() in turn calls ByteStreamFileSource to fetch data and then analyzes it, and once parse() has completed, the following is called:
void H264FUAFragmenter::afterGettingFrame1(
    unsigned frameSize,
    unsigned numTruncatedBytes,
    struct timeval presentationTime,
    unsigned durationInMicroseconds) {
  fNumValidDataBytes += frameSize;
  fSaveNumTruncatedBytes = numTruncatedBytes;
  fPresentationTime = presentationTime;
  fDurationInMicroseconds = durationInMicroseconds;

  // Deliver data to the client:
  doGetNextFrame();
}
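As a worked example of the byte manipulation in doGetNextFrame() above: suppose the NAL unit sitting at fInputBuffer[1] is an IDR slice whose header byte is 0x65 (NRI = 3, nal_unit_type = 5) and it is too large for one packet. The first fragment then gets FU indicator (0x65 & 0xE0) | 28 = 0x7C and FU header 0x80 | (0x65 & 0x1F) = 0x85 (S bit set); middle fragments carry FU header 0x05, and the last fragment carries 0x45 (E bit set). A tiny standalone check of that arithmetic:

#include <cstdint>
#include <cstdio>

int main() {
  uint8_t nalHeader = 0x65;                             // IDR slice: NRI=3, type=5
  uint8_t fuIndicator = (nalHeader & 0xE0) | 28;        // keep NRI, type becomes 28 (FU-A)
  uint8_t fuHeaderFirst = 0x80 | (nalHeader & 0x1F);    // S bit + original NAL type
  uint8_t fuHeaderMiddle = nalHeader & 0x1F;            // no S/E bits
  uint8_t fuHeaderLast = 0x40 | (nalHeader & 0x1F);     // E bit + original NAL type
  std::printf("FU indicator: 0x%02X\n", fuIndicator);   // 0x7C
  std::printf("FU headers: first 0x%02X, middle 0x%02X, last 0x%02X\n",
              fuHeaderFirst, fuHeaderMiddle, fuHeaderLast); // 0x85, 0x05, 0x45
  return 0;
}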