
iPhone: saving an augmented reality screenshot with AVCaptureVideoPreviewLayer

I have a small augmented reality app in development, and I would like to know how to save a screenshot of what the user sees at the tap of a button or on a timer.

The app works by overlaying a live camera feed on top of another UIView. I can save a screenshot by pressing the power button + home button; those are saved to the camera roll. However, even when I ask the window itself to render, Apple will not render the AVCaptureVideoPreviewLayer: it leaves a transparent patch of canvas where the preview layer is.

What is the proper way for an augmented reality app to save screenshots, including transparency and subviews?

//displaying a live preview on one of the views
-(void)startCapture
{
    captureSession = [[AVCaptureSession alloc] init];

    //pick the front or rear camera for the video input
    AVCaptureDevice *videoCaptureDevice = nil;
    //  videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    for (AVCaptureDevice *device in videoDevices) {
        if (useFrontCamera) {
            if (device.position == AVCaptureDevicePositionFront) {
                //front-facing camera exists
                videoCaptureDevice = device;
                break;
            }
        } else {
            if (device.position == AVCaptureDevicePositionBack) {
                //rear-facing camera exists
                videoCaptureDevice = device;
                break;
            }
        }
    }

    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if (videoInput) {
        [captureSession addInput:videoInput];
    }
    else {
        // Handle the failure.
    }

    //create and configure the data output before asking the session whether it can be added
    dispatch_queue_t queue = dispatch_queue_create("com.AugmentedRealityGlamour.ImageCaptureQueue", NULL);
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);

    if ([captureSession canAddOutput:captureOutput]) {
        [captureSession addOutput:captureOutput];
    } else {
        //handle failure
    }

    previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    UIView *aView = arOverlayView;
    previewLayer.frame = CGRectMake(0, 0, arOverlayView.frame.size.width, arOverlayView.frame.size.height); // preview layer fills the view

    [aView.layer addSublayer:previewLayer];
    [captureSession startRunning];
}

//ask the entire window to draw itself in a graphics context. This call will not render
//the AVCaptureVideoPreviewLayer. The preview layer has to be replaced with a UIImageView
//or a GL-based view; see further below for the code that keeps a UIImageView dynamically updated.

//Here is the code that creates the screenshot image:
-(void)saveScreenshot
{
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);

    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()]; 

    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext(); 
    UIGraphicsEndImageContext(); 

    UIImageWriteToSavedPhotosAlbum(screenshot, self, 
            @selector(image:didFinishSavingWithError:contextInfo:), nil); 


} 


//image saved to camera roll callback 
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error 
    contextInfo:(void *)contextInfo 
{ 
    // Was there an error? 
    if (error != nil)
    { 
     // Show error message... 
     NSLog(@"save failed"); 

    } 
    else // No errors 
    { 
     NSLog(@"save successful"); 
     // Show message image successfully saved 
    } 
} 

Setting the sample buffer delegate on the camera output:

-(void)activateCameraFeed 
{ 
//this is the code responsible for capturing feed for still image processing 
dispatch_queue_t queue = dispatch_queue_create("com.AugmentedRealityGlamour.ImageCaptureQueue", NULL); 

    captureOutput = [[AVCaptureVideoDataOutput alloc] init]; 
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES]; 
    [captureOutput setSampleBufferDelegate:self queue:queue]; 
    [captureOutput setVideoSettings:videoSettings]; 

    dispatch_release(queue); 

//......configure audio feed, add inputs and outputs 

} 
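The videoSettings dictionary is never defined in these snippets; since the pixel-format check in performImageCaptureFrom: below only accepts kCVPixelFormatType_32BGRA, it presumably looks something like this sketch:

//sketch only: request BGRA frames so the CoreGraphics code below can consume them
NSDictionary *videoSettings =
    [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];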

//buffer delegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (ignoreImageStream)
        return;

    [self performImageCaptureFrom:sampleBuffer];
}

Creating a UIImage from the buffered sample data:

- (void) performImageCaptureFrom:(CMSampleBufferRef)sampleBuffer 
{ 
    CVImageBufferRef imageBuffer; 

    if (CMSampleBufferGetNumSamples(sampleBuffer) != 1) 
     return; 
    if (!CMSampleBufferIsValid(sampleBuffer)) 
     return; 
    if (!CMSampleBufferDataIsReady(sampleBuffer)) 
     return; 

    imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    if (CVPixelBufferGetPixelFormatType(imageBuffer) != kCVPixelFormatType_32BGRA) 
     return; 

    CVPixelBufferLockBaseAddress(imageBuffer,0); 

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    CGImageRef newImage = nil; 

    if (cameraDeviceSetting == CameraDeviceSetting640x480) 
    { 
     CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
     CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
     newImage = CGBitmapContextCreateImage(newContext); 
     CGColorSpaceRelease(colorSpace); 
     CGContextRelease(newContext); 
    } 
    else 
    { 
     //the temp buffer must hold both the copied source frame and the 640x480 scaled target
     size_t tempSize = MAX(bytesPerRow * height, (size_t)640 * 4 * 480);
     uint8_t *tempAddress = malloc(tempSize);
     memcpy(tempAddress, baseAddress, bytesPerRow * height);
     baseAddress = tempAddress;
     CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
     CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
     newImage = CGBitmapContextCreateImage(newContext);
     CGContextRelease(newContext);
     newContext = CGBitmapContextCreate(baseAddress, 640, 480, 8, 640*4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
     //CGContextDrawImage scales the source image to fill the destination rect
     CGContextDrawImage(newContext, CGRectMake(0, 0, 640, 480), newImage);
     CGImageRelease(newImage);
     newImage = CGBitmapContextCreateImage(newContext);
     CGColorSpaceRelease(colorSpace);
     CGContextRelease(newContext);
     free(tempAddress);
    } 

    if (newImage != nil) 
    { 

//modified for iOS5.0 with ARC  
    tempImage = [[UIImage alloc] initWithCGImage:newImage scale:(CGFloat)1.0 orientation:cameraImageOrientation]; 
    CGImageRelease(newImage); 

//this call creates the illusion of a preview layer, while we are actively switching images created with this method 
     [self performSelectorOnMainThread:@selector(newCameraImageNotification:) withObject:tempImage waitUntilDone:YES]; 
    } 

    CVPixelBufferUnlockBaseAddress(imageBuffer,0); 
} 

Finally, the updated image goes into a UIView that can actually be rendered in a graphics context:

- (void) newCameraImageNotification:(UIImage*)newImage 
{ 
    if (newImage == nil) 
     return; 

     [arOverlayView setImage:newImage]; 
//or do more advanced processing of the image 
} 
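With the preview layer replaced by a UIImageView that newCameraImageNotification: keeps updated, the window snapshot in saveScreenshot finally includes the camera feed. A minimal sketch of a button handler tying the pieces together (the IBAction name is made up; ignoreImageStream is the flag used in the delegate callback above):

- (IBAction)captureButtonTapped:(id)sender
{
    ignoreImageStream = YES;  //freeze the displayed frame while rendering
    [self saveScreenshot];    //the window render now includes arOverlayView's image
    ignoreImageStream = NO;
}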
Are you after a programmatic snapshot of what's on the screen, rather than pressing home/power? There is a function for grabbing a UIView-based screen and one for grabbing an OpenGL/GLES1 screen. I'll post the code anyway :) –

Please check: http://stackoverflow.com/questions/3397899/avcapturevideopreviewlayer-taking-a-snapshot/13576530#13576530. Hope it helps! –
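For reference, the linked answer goes through AVCaptureStillImageOutput and composites the overlay onto the captured still. A rough sketch of that idea, paraphrased rather than copied from the link:

//sketch: grab a full-resolution still, then draw the UIKit overlay onto it
AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
if ([captureSession canAddOutput:stillOutput])
    [captureSession addOutput:stillOutput];

AVCaptureConnection *connection = [stillOutput connectionWithMediaType:AVMediaTypeVideo];
[stillOutput captureStillImageAsynchronouslyFromConnection:connection
    completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer == NULL)
            return;
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *cameraImage = [UIImage imageWithData:jpegData];
        //...then composite the overlay view onto cameraImage with renderInContext:
    }];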

answer

Grabbing a snapshot of what's on the screen is exactly what I do in one of my camera apps. I haven't touched this code in a long time, so there may be a better iOS 5.0 way by now, but it is rock solid, with over 1 million downloads.

// 
// ScreenCapture.m 
// LiveEffectsCam 
// 
// Created by John Carter on 10/8/10. 
// 

#import "ScreenCapture.h" 

#import <QuartzCore/CABase.h> 
#import <QuartzCore/CATransform3D.h> 
#import <QuartzCore/CALayer.h> 
#import <QuartzCore/CAScrollLayer.h> 

#import <OpenGLES/EAGL.h> 
#import <OpenGLES/ES1/gl.h> 
#import <OpenGLES/ES1/glext.h> 
#import <QuartzCore/QuartzCore.h> 
#import <OpenGLES/EAGLDrawable.h> 


@implementation ScreenCapture 

+ (UIImage *) GLViewToImage:(GLView *)glView 
{ 
    UIImage *glImage = [ScreenCapture snapshot:glView]; // returns an autoreleased image
    return glImage; 
} 

+ (UIImage *) GLViewToImage:(GLView *)glView withOverlayImage:(UIImage *)overlayImage 
{ 
    UIImage *glImage = [ScreenCapture snapshot:glView]; // returns an autoreleased image

    // Merge Image and Overlay 
    // 
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, glImage.size.width*glImage.scale, glImage.size.height*glImage.scale); 
    CGImageRef overlayCopy = CGImageCreateCopy(overlayImage.CGImage); 
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(NULL, (int)glImage.size.width*glImage.scale, (int)glImage.size.height*glImage.scale, 8, (int)glImage.size.width*4*glImage.scale, colorSpace, kCGImageAlphaPremultipliedLast); 
    CGContextDrawImage(context, imageRect, glImage.CGImage); 
    CGContextDrawImage(context, imageRect, overlayCopy); 
    CGImageRef newImage = CGBitmapContextCreateImage(context); 
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease]; 
    CGImageRelease(newImage); 
    CGImageRelease(overlayCopy); 
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace); 

    return combinedViewImage; 
} 

+ (UIImage *) UIViewToImage:(UIView *)view withOverlayImage:(UIImage *)overlayImage
{
    UIImage *viewImage = [ScreenCapture UIViewToImage:view]; // returns an autoreleased image 

    // Merge Image and Overlay 
    // 
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, viewImage.size.width*viewImage.scale, viewImage.size.height*viewImage.scale); 
    CGImageRef overlayCopy = CGImageCreateCopy(overlayImage.CGImage); 
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(NULL, (int)viewImage.size.width*viewImage.scale, (int)viewImage.size.height*viewImage.scale, 8, (int)viewImage.size.width*4*viewImage.scale, colorSpace, kCGImageAlphaPremultipliedLast); 
    CGContextDrawImage(context, imageRect, viewImage.CGImage); 
    CGContextDrawImage(context, imageRect, overlayCopy); 
    CGImageRef newImage = CGBitmapContextCreateImage(context); 
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease]; 
    CGImageRelease(newImage); 
    CGImageRelease(overlayCopy); 
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace); 

    return combinedViewImage; 
} 



+ (UIImage *) UIViewToImage:(UIView *)view 
{ 
    // Create a graphics context with the target size 
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration 
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext 
    // 
    // CGSize imageSize = [[UIScreen mainScreen] bounds].size; 
    CGSize imageSize = CGSizeMake((CGFloat)480.0, (CGFloat)640.0);  // camera image size 

    if (NULL != UIGraphicsBeginImageContextWithOptions) 
     UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0); 
    else 
     UIGraphicsBeginImageContext(imageSize); 

    CGContextRef context = UIGraphicsGetCurrentContext(); 

    // Start with the view... 
    // 
    CGContextSaveGState(context); 
    CGContextTranslateCTM(context, [view center].x, [view center].y); 
    CGContextConcatCTM(context, [view transform]); 
    CGContextTranslateCTM(context,-[view bounds].size.width * [[view layer] anchorPoint].x,-[view bounds].size.height * [[view layer] anchorPoint].y); 
    [[view layer] renderInContext:context]; 
    CGContextRestoreGState(context); 

    // ...then repeat for every subview from back to front 
    // 
    for (UIView *subView in [view subviews]) 
    { 
     if ([subView respondsToSelector:@selector(screen)]) 
      if ([(UIWindow *)subView screen] == [UIScreen mainScreen]) 
       continue; 

     CGContextSaveGState(context); 
     CGContextTranslateCTM(context, [subView center].x, [subView center].y); 
     CGContextConcatCTM(context, [subView transform]); 
     CGContextTranslateCTM(context,-[subView bounds].size.width * [[subView layer] anchorPoint].x,-[subView bounds].size.height * [[subView layer] anchorPoint].y); 
     [[subView layer] renderInContext:context]; 
     CGContextRestoreGState(context); 
    } 

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image 

    UIGraphicsEndImageContext(); 

    return image; 
} 

+ (UIImage *) snapshot:(GLView *)eaglview 
{ 
    NSInteger x = 0; 
    NSInteger y = 0; 
    NSInteger width = [eaglview backingWidth]; 
    NSInteger height = [eaglview backingHeight]; 
    NSInteger dataLength = width * height * 4; 

    // give pending GL rendering and run-loop work time to finish before reading the framebuffer
    NSUInteger i;
    for (i=0; i<100; i++)
    {
     glFlush();
     CFRunLoopRunInMode(kCFRunLoopDefaultMode, (float)1.0/(float)60.0, FALSE);
    }

    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte)); 

    // Read pixel data from the framebuffer 
    // 
    glPixelStorei(GL_PACK_ALIGNMENT, 4); 
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data); 

    // Create a CGImage with the pixel data 
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel 
    // otherwise, use kCGImageAlphaPremultipliedLast 
    // 
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL); 
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB(); 
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault); 

    // OpenGL ES measures data in PIXELS 
    // Create a graphics context with the target size measured in POINTS 
    // 
    NSInteger widthInPoints; 
    NSInteger heightInPoints; 

    if (NULL != UIGraphicsBeginImageContextWithOptions) 
    { 
     // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration 
     // Set the scale parameter to your OpenGL ES view's contentScaleFactor 
     // so that you get a high-resolution snapshot when its value is greater than 1.0 
     // 
     CGFloat scale = eaglview.contentScaleFactor; 
     widthInPoints = width/scale; 
     heightInPoints = height/scale; 
     UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale); 
    } 
    else 
    { 
     // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext 
     // 
     widthInPoints = width; 
     heightInPoints = height; 
     UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints)); 
    } 

    CGContextRef cgcontext = UIGraphicsGetCurrentContext(); 

    // UIKit coordinate system is upside down to GL/Quartz coordinate system 
    // Flip the CGImage by rendering it to the flipped bitmap context 
    // The size of the destination area is measured in POINTS 
    // 
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy); 
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref); 

    // Retrieve the UIImage from the current context 
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image 

    UIGraphicsEndImageContext(); 

    // Clean up 
    free(data); 
    CFRelease(ref); 
    CFRelease(colorspace); 
    CGImageRelease(iref); 

    return image; 
} 

@end 
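A hypothetical call site for the class above (cameraGLView and hudImage are placeholder names, not from the code):

//flatten the GL camera view plus a HUD image and save the result
UIImage *screenshot = [ScreenCapture GLViewToImage:cameraGLView withOverlayImage:hudImage];
UIImageWriteToSavedPhotosAlbum(screenshot, nil, NULL, NULL);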
I checked your implementation, it's great! I'm going to try the + (UIImage *) UIViewToImage:(UIView *)view method now –

This implementation didn't work for me, because I'm using an AVCaptureVideoPreviewLayer while your app uses a GLView to render the live feed. The preview layer does not show up in the image when I save the view. –

There are methods for both UIView-based and OpenGL-based screen captures. I'm not sure which one you implemented, because my app also uses AVCaptureVideoPreviewLayer. –
