Android Camera Integration with Kotlin: A Step-by-Step Guide Using CameraX and Jetpack Compose (2025)
Are you struggling with camera implementation in your Android app? In this comprehensive tutorial, you'll master Android camera integration using the powerful CameraX API with Kotlin and Jetpack Compose. Whether you're building a social media app, a document scanner, or implementing AR features, this step-by-step guide covers everything you need to know.
By the end of this tutorial, you'll know how to:
- Create a professional camera preview in your Compose UI
- Capture high-quality photos and videos efficiently
- Implement machine learning features with ML Kit integration
- Apply advanced camera extensions like HDR, Night Mode, and Portrait/Bokeh effects
- Handle camera lifecycles and errors like a professional developer
- Difficulty level: Intermediate
- Prerequisites: Basic knowledge of Kotlin and Jetpack Compose
- Android API compatibility: Android 5.0 (API 21) and above
🔧 Why Choose CameraX for Your Android App?
Modern Android camera development has historically been complicated, with inconsistent behavior across different device manufacturers. The CameraX API solves these problems and delivers these key advantages:
- Cross-device compatibility: Works consistently on 90%+ of Android devices (API 21/Android 5.0 and higher)
- Lifecycle awareness: Camera resources are automatically managed based on your app's lifecycle
- Use-case driven API: Ready-to-use components for Preview, ImageCapture, VideoCapture, and ImageAnalysis
- Compose integration: Seamless integration with modern Jetpack Compose UI
- ML Kit compatibility: Built-in support for machine learning use cases (barcode scanning, face detection, etc.)
- Advanced photography: Access to vendor extensions like HDR, Night mode, and Portrait/Bokeh effects
"CameraX is the recommended camera library for most Android developers today." - Android Developer Documentation
📦 Setting Up Your CameraX Project
1. Project Setup
Create a new Kotlin project in Android Studio using the "Empty Compose Activity" template (Jetpack Compose enabled), with a minimum SDK of API 21 (Android 5.0) or higher and the Gradle Kotlin DSL (build.gradle.kts) build configuration.
2. Permissions
In AndroidManifest.xml, declare the camera permission, the camera hardware feature, and the optional media permissions:
```xml
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA"/>

<!-- Ensures your app only shows on devices with a camera -->
<uses-feature android:name="android.hardware.camera.any"/>

<!-- Required for video recording -->
<uses-permission android:name="android.permission.RECORD_AUDIO"/>

<!-- Optional: for saving media to shared storage on Android 12L (API 32) and below -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" android:maxSdkVersion="32" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" android:maxSdkVersion="32" />

<!-- For Android 13+ -->
<uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
<uses-permission android:name="android.permission.READ_MEDIA_VIDEO" />
```
You'll also need runtime permission handling in your app logic; the Best Practices section later in this tutorial implements it with the Accompanist Permissions library (standard permission checks work as well).
3. Dependencies
Add the following dependencies to your app module's build.gradle.kts:
```kotlin
// Define CameraX version - use the latest stable version
val cameraxVersion = "1.4.2"

dependencies {
    // Core CameraX libraries
    implementation("androidx.camera:camera-core:$cameraxVersion")
    implementation("androidx.camera:camera-camera2:$cameraxVersion")
    implementation("androidx.camera:camera-lifecycle:$cameraxVersion")
    implementation("androidx.camera:camera-view:$cameraxVersion")

    // For video capture functionality
    implementation("androidx.camera:camera-video:$cameraxVersion")

    // For ML Kit integration
    implementation("androidx.camera:camera-mlkit-vision:$cameraxVersion")

    // For HDR, Night Mode, Portrait features
    implementation("androidx.camera:camera-extensions:$cameraxVersion")

    // For Jetpack Compose integration
    implementation("androidx.camera:camera-compose:$cameraxVersion")

    // For permission handling (optional but recommended)
    implementation("com.google.accompanist:accompanist-permissions:0.33.2-alpha")
}
```
After adding these dependencies, sync your project to ensure everything is correctly set up.
📷 Building a Camera Preview with Image Capture in Compose
Let's start with the most common use case: displaying a live camera preview and capturing photos when the user taps a button.
Here's the complete implementation of a camera screen with photo capture functionality:
```kotlin
import android.net.Uri
import android.util.Log
import androidx.camera.core.*
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.compose.foundation.layout.*
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.CameraAlt
import androidx.compose.material.icons.filled.FlipCameraAndroid
import androidx.compose.material3.FloatingActionButton
import androidx.compose.material3.Icon
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.platform.LocalLifecycleOwner
import androidx.compose.ui.unit.dp
import androidx.compose.ui.viewinterop.AndroidView
import androidx.core.content.ContextCompat
import java.io.File

/**
 * A full-screen camera implementation with image capture functionality
 *
 * @param onImageCaptured Callback triggered when an image is successfully captured
 * @param onError Callback triggered on camera errors
 */
@Composable
fun CameraScreen(
    onImageCaptured: (Uri) -> Unit,
    onError: (ImageCaptureException) -> Unit
) {
    // Camera state
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    var lensFacing by remember { mutableStateOf(CameraSelector.LENS_FACING_BACK) }
    val cameraSelector = remember(lensFacing) {
        CameraSelector.Builder().requireLensFacing(lensFacing).build()
    }

    // Initialize CameraX use cases
    val cameraProviderFuture = remember { ProcessCameraProvider.getInstance(context) }

    // Configure image capture with high-quality settings
    val imageCapture = remember {
        ImageCapture.Builder()
            .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
            .setTargetAspectRatio(AspectRatio.RATIO_16_9)
            .setFlashMode(ImageCapture.FLASH_MODE_AUTO)
            .build()
    }

    // Camera preview using AndroidView to integrate with Compose
    Box(modifier = Modifier.fillMaxSize()) {
        AndroidView(
            factory = { ctx ->
                // Configure preview view
                val previewView = PreviewView(ctx).apply {
                    implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                    scaleType = PreviewView.ScaleType.FILL_CENTER
                }

                // Setup and bind use cases when camera provider is available
                cameraProviderFuture.addListener({
                    val cameraProvider = cameraProviderFuture.get()

                    // Configure preview use case
                    val preview = Preview.Builder()
                        .setTargetAspectRatio(AspectRatio.RATIO_16_9)
                        .build()
                        .also { it.setSurfaceProvider(previewView.surfaceProvider) }

                    try {
                        // Unbind previous use cases before rebinding
                        cameraProvider.unbindAll()

                        // Bind camera to lifecycle
                        cameraProvider.bindToLifecycle(
                            lifecycleOwner,
                            cameraSelector,
                            preview,
                            imageCapture
                        )
                    } catch (e: Exception) {
                        Log.e("CameraScreen", "Camera binding failed", e)
                    }
                }, ContextCompat.getMainExecutor(ctx))

                previewView
            },
            modifier = Modifier.fillMaxSize()
        )

        // Camera controls overlay
        Column(
            modifier = Modifier.fillMaxSize(),
            verticalArrangement = Arrangement.Bottom,
            horizontalAlignment = Alignment.CenterHorizontally
        ) {
            Row(
                modifier = Modifier
                    .fillMaxWidth()
                    .padding(bottom = 32.dp),
                horizontalArrangement = Arrangement.SpaceEvenly
            ) {
                // Camera flip button
                FloatingActionButton(
                    onClick = {
                        lensFacing = if (lensFacing == CameraSelector.LENS_FACING_BACK)
                            CameraSelector.LENS_FACING_FRONT
                        else
                            CameraSelector.LENS_FACING_BACK
                    }
                ) {
                    Icon(Icons.Default.FlipCameraAndroid, contentDescription = "Switch camera")
                }

                // Capture button
                FloatingActionButton(
                    onClick = {
                        // Create output file in cache directory
                        val photoFile = File(
                            context.cacheDir,
                            "IMG_${System.currentTimeMillis()}.jpg"
                        )

                        // Configure output options with metadata
                        val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile)
                            .setMetadata(
                                ImageCapture.Metadata().apply {
                                    isReversedHorizontal =
                                        lensFacing == CameraSelector.LENS_FACING_FRONT
                                }
                            ).build()

                        // Capture the image
                        imageCapture.takePicture(
                            outputOptions,
                            ContextCompat.getMainExecutor(context),
                            object : ImageCapture.OnImageSavedCallback {
                                override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                                    // Return the image URI to the caller
                                    output.savedUri?.let { uri ->
                                        onImageCaptured(uri)
                                    } ?: onImageCaptured(Uri.fromFile(photoFile))
                                }

                                override fun onError(exception: ImageCaptureException) {
                                    Log.e("CameraScreen", "Image capture failed", exception)
                                    onError(exception)
                                }
                            }
                        )
                    },
                    modifier = Modifier.size(72.dp)
                ) {
                    Icon(
                        Icons.Default.CameraAlt,
                        contentDescription = "Take photo",
                        modifier = Modifier.size(36.dp)
                    )
                }
            }

            Spacer(modifier = Modifier.height(36.dp))
        }
    }
}
```
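To round out the picture, here is a minimal sketch of how this composable could be hosted from an activity. The `MainActivity` name and the toast-based callbacks are illustrative assumptions, and the camera permission is assumed to be granted already (permission handling is covered later in this tutorial):

```kotlin
import android.os.Bundle
import android.widget.Toast
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent

// Hypothetical host activity: shows CameraScreen and reports results via toasts.
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            CameraScreen(
                onImageCaptured = { uri ->
                    // Hand the URI to your own save/share/upload logic here.
                    Toast.makeText(this@MainActivity, "Photo saved: $uri", Toast.LENGTH_SHORT).show()
                },
                onError = { exception ->
                    Toast.makeText(this@MainActivity, "Capture failed: ${exception.message}", Toast.LENGTH_SHORT).show()
                }
            )
        }
    }
}
```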
🎥 Implementing Video Recording with CameraX
Recording video is another essential feature in many camera applications. Here's how to add professional video recording capability to your app:
Creating a Video Recording Composable
```kotlin
import android.Manifest
import android.content.ContentValues
import android.net.Uri
import android.os.Build
import android.provider.MediaStore
import android.util.Log
import androidx.camera.core.*
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.video.*
import androidx.camera.video.VideoCapture
import androidx.camera.view.PreviewView
import androidx.compose.foundation.layout.*
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.*
import androidx.compose.material3.*
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.platform.LocalLifecycleOwner
import androidx.compose.ui.unit.dp
import androidx.compose.ui.viewinterop.AndroidView
import androidx.core.content.ContextCompat
import androidx.core.content.PermissionChecker
import java.text.SimpleDateFormat
import java.util.*

@Composable
fun VideoRecordingScreen(
    onVideoSaved: (Uri) -> Unit,
    onError: (String) -> Unit
) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current

    // State for recording
    var isRecording by remember { mutableStateOf(false) }
    var recording: Recording? by remember { mutableStateOf(null) }

    // Date formatter for file names
    val dateFormat = SimpleDateFormat("yyyy-MM-dd-HH-mm-ss-SSS", Locale.US)

    // Create use cases
    var videoCapture: VideoCapture<Recorder>? by remember { mutableStateOf(null) }
    val preview = remember { Preview.Builder().build() }

    // Create preview view
    Box(modifier = Modifier.fillMaxSize()) {
        AndroidView(
            factory = { ctx ->
                val previewView = PreviewView(ctx).apply {
                    implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                }

                val cameraProviderFuture = ProcessCameraProvider.getInstance(ctx)
                cameraProviderFuture.addListener({
                    val cameraProvider = cameraProviderFuture.get()

                    // Set up the preview use case
                    preview.setSurfaceProvider(previewView.surfaceProvider)

                    // Set up video capture use case
                    val recorder = Recorder.Builder()
                        .setQualitySelector(
                            QualitySelector.from(
                                Quality.HIGHEST,
                                FallbackStrategy.higherQualityOrLowerThan(Quality.SD)
                            )
                        )
                        .build()
                    videoCapture = VideoCapture.withOutput(recorder)

                    try {
                        // Unbind all use cases and rebind with new ones
                        cameraProvider.unbindAll()
                        cameraProvider.bindToLifecycle(
                            lifecycleOwner,
                            CameraSelector.DEFAULT_BACK_CAMERA,
                            preview,
                            videoCapture
                        )
                    } catch (e: Exception) {
                        Log.e("VideoRecording", "Use case binding failed", e)
                        onError("Camera initialization failed: ${e.message}")
                    }
                }, ContextCompat.getMainExecutor(ctx))

                previewView
            },
            modifier = Modifier.fillMaxSize()
        )

        // Record button
        Box(
            modifier = Modifier
                .fillMaxSize()
                .padding(bottom = 32.dp),
            contentAlignment = Alignment.BottomCenter
        ) {
            FloatingActionButton(
                onClick = {
                    if (isRecording) {
                        // Stop recording
                        recording?.stop()
                        isRecording = false
                    } else {
                        // Start recording
                        val videoCapture = videoCapture ?: return@FloatingActionButton

                        // Create output file
                        val name = "VIDEO_${dateFormat.format(Date())}"
                        val contentValues = ContentValues().apply {
                            put(MediaStore.Video.Media.DISPLAY_NAME, name)
                            put(MediaStore.Video.Media.MIME_TYPE, "video/mp4")
                            if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
                                put(MediaStore.Video.Media.RELATIVE_PATH, "Movies/CameraX-Video")
                            }
                        }
                        val mediaStoreOutput = MediaStoreOutputOptions.Builder(
                            context.contentResolver,
                            MediaStore.Video.Media.EXTERNAL_CONTENT_URI
                        )
                            .setContentValues(contentValues)
                            .build()

                        // Configure options and start recording
                        recording = videoCapture.output
                            .prepareRecording(context, mediaStoreOutput)
                            .apply {
                                if (PermissionChecker.checkSelfPermission(
                                        context,
                                        Manifest.permission.RECORD_AUDIO
                                    ) == PermissionChecker.PERMISSION_GRANTED
                                ) {
                                    withAudioEnabled()
                                }
                            }
                            .start(ContextCompat.getMainExecutor(context)) { event ->
                                when (event) {
                                    is VideoRecordEvent.Start -> {
                                        isRecording = true
                                    }
                                    is VideoRecordEvent.Finalize -> {
                                        isRecording = false
                                        if (event.hasError()) {
                                            recording = null
                                            onError("Video recording failed: ${event.error}")
                                        } else {
                                            event.outputResults.outputUri?.let { uri ->
                                                onVideoSaved(uri)
                                            }
                                        }
                                    }
                                }
                            }
                    }
                },
                containerColor = if (isRecording) Color.Red else MaterialTheme.colorScheme.primary,
                modifier = Modifier.size(72.dp)
            ) {
                Icon(
                    if (isRecording) Icons.Default.Stop else Icons.Default.Videocam,
                    contentDescription = if (isRecording) "Stop recording" else "Start recording",
                    modifier = Modifier.size(36.dp)
                )
            }
        }
    }
}
```
🧠 Adding Machine Learning with CameraX and ML Kit
One of the most powerful features of CameraX is its seamless integration with ML Kit. This allows you to add intelligent features like barcode scanning, face detection, text recognition, and more to your camera application.
Implementing Barcode Scanning
Here's how to implement a real-time QR code and barcode scanner in your app:
```kotlin
import androidx.camera.core.*
import androidx.camera.mlkit.vision.MlKitAnalyzer
import androidx.camera.view.CameraController
import androidx.camera.view.LifecycleCameraController
import androidx.compose.foundation.border
import androidx.compose.foundation.layout.*
import androidx.compose.material3.*
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.platform.LocalLifecycleOwner
import androidx.compose.ui.unit.dp
import androidx.compose.ui.viewinterop.AndroidView
import androidx.core.content.ContextCompat
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.barcode.common.Barcode

@Composable
fun BarcodeScanner(
    onBarcodeDetected: (String) -> Unit
) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current

    // State for detected barcode value
    var barcodeValue by remember { mutableStateOf<String?>(null) }

    // Create barcode scanner
    val barcodeScanner = remember { BarcodeScanning.getClient() }

    // Use CameraController from CameraX
    val cameraController = remember {
        LifecycleCameraController(context).apply {
            setEnabledUseCases(CameraController.IMAGE_ANALYSIS)
            setImageAnalysisAnalyzer(
                ContextCompat.getMainExecutor(context),
                MlKitAnalyzer(
                    listOf(barcodeScanner),
                    CameraController.COORDINATE_SYSTEM_VIEW_REFERENCED,
                    ContextCompat.getMainExecutor(context)
                ) { analyzedImage ->
                    // Process detection results
                    val barcodeResults = analyzedImage.getValue(barcodeScanner)
                    if (barcodeResults != null && barcodeResults.size > 0) {
                        // Report each valid barcode
                        for (barcode in barcodeResults) {
                            // We're interested in displayValue, which holds the decoded text
                            barcode.displayValue?.let { value ->
                                if (value != barcodeValue) {
                                    barcodeValue = value
                                    onBarcodeDetected(value)
                                }
                            }
                        }
                    }
                }
            )
        }
    }

    Box(modifier = Modifier.fillMaxSize()) {
        // Camera preview
        AndroidView(
            factory = {
                androidx.camera.view.PreviewView(it).apply {
                    controller = cameraController
                    implementationMode =
                        androidx.camera.view.PreviewView.ImplementationMode.PERFORMANCE
                    cameraController.bindToLifecycle(lifecycleOwner)
                }
            },
            modifier = Modifier.fillMaxSize()
        )

        // Scanner overlay
        Box(
            modifier = Modifier
                .size(300.dp)
                .align(Alignment.Center)
                .border(2.dp, Color.White)
        )

        // Display detected barcode value
        barcodeValue?.let { value ->
            Card(
                modifier = Modifier
                    .align(Alignment.BottomCenter)
                    .padding(16.dp)
                    .fillMaxWidth()
            ) {
                Column(
                    modifier = Modifier.padding(16.dp)
                ) {
                    Text(
                        text = "Detected Barcode:",
                        style = MaterialTheme.typography.bodyMedium
                    )
                    Text(
                        text = value,
                        style = MaterialTheme.typography.bodyLarge
                    )
                }
            }
        }
    }
}
```
Adding Other ML Kit Features
You can easily extend this approach to implement other ML Kit features:
- Face Detection: Great for selfie apps or augmented reality effects
- Text Recognition: Perfect for document scanning apps
- Object Detection: Identify objects in the camera view
- Pose Detection: Track body positions for fitness apps
For example, here's how to implement face detection:
```kotlin
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

// Create a face detector with desired options
val faceDetector = FaceDetection.getClient(
    FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
        .build()
)

// Use the face detector with MlKitAnalyzer
cameraController.setImageAnalysisAnalyzer(
    ContextCompat.getMainExecutor(context),
    MlKitAnalyzer(
        listOf(faceDetector),
        CameraController.COORDINATE_SYSTEM_VIEW_REFERENCED,
        ContextCompat.getMainExecutor(context)
    ) { analyzedImage ->
        val faceResults = analyzedImage.getValue(faceDetector)
        // Process detected faces
    }
)
```
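Text recognition follows the same pattern. The snippet below is a sketch rather than part of the original tutorial: it assumes the ML Kit text-recognition dependency (com.google.mlkit:text-recognition) has been added alongside the CameraX ones, and it reuses the `cameraController` and `context` from the barcode example above:

```kotlin
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Create a recognizer for Latin-script text
val textRecognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

// Analyze frames with the same MlKitAnalyzer pattern as the barcode scanner
cameraController.setImageAnalysisAnalyzer(
    ContextCompat.getMainExecutor(context),
    MlKitAnalyzer(
        listOf(textRecognizer),
        CameraController.COORDINATE_SYSTEM_VIEW_REFERENCED,
        ContextCompat.getMainExecutor(context)
    ) { analyzedImage ->
        // getValue returns the recognized Text, or null if nothing was detected
        val recognizedText = analyzedImage.getValue(textRecognizer)
        recognizedText?.textBlocks?.forEach { block ->
            Log.d("TextScanner", block.text)
        }
    }
)
```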
✨ Adding Professional Photography Features with Camera Extensions
CameraX Extensions allow your app to access advanced camera features provided by device manufacturers, such as HDR (High Dynamic Range), Night mode, Portrait/Bokeh effect, and more. These extensions enable your app to take professional-quality photos without needing to implement complex camera algorithms yourself.
Available Camera Extensions
| Extension | Description | Common Use Cases |
|---|---|---|
| HDR | High Dynamic Range | Scenes with bright and dark areas, landscapes, backlit subjects |
| Night | Low-light enhancement | Indoor photos, evening/night photography |
| Bokeh | Portrait mode with background blur | Portraits, product photography |
| Auto | Device-optimized automatic mode | General photography |
| Face Retouch | Skin smoothing and enhancement | Selfies, portraits |
| Beauty | Advanced face enhancement | Social media photos, selfies |
Implementing Camera Extensions
```kotlin
import android.net.Uri
import androidx.camera.core.*
import androidx.camera.extensions.*
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.compose.foundation.layout.*
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.CameraAlt
import androidx.compose.material3.*
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.platform.LocalLifecycleOwner
import androidx.compose.ui.unit.dp
import androidx.compose.ui.viewinterop.AndroidView
import androidx.core.content.ContextCompat
import kotlinx.coroutines.launch
import java.io.File

@Composable
fun ExtendedCameraScreen(
    onImageCaptured: (Uri) -> Unit,
    onError: (ImageCaptureException) -> Unit
) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    val scope = rememberCoroutineScope()

    // State for current extension mode
    var currentExtensionMode by remember { mutableStateOf(ExtensionMode.NONE) }

    // Camera provider and extensions manager
    val cameraProviderFuture = remember { ProcessCameraProvider.getInstance(context) }
    val cameraProvider = remember { mutableStateOf<ProcessCameraProvider?>(null) }
    val extensionsManager = remember { mutableStateOf<ExtensionsManager?>(null) }

    // Image capture use case
    val imageCapture = remember {
        ImageCapture.Builder()
            .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
            .build()
    }

    // Initialize camera provider and extensions manager
    LaunchedEffect(Unit) {
        scope.launch {
            try {
                cameraProvider.value = cameraProviderFuture.get()
                extensionsManager.value = ExtensionsManager.getInstanceAsync(
                    context,
                    cameraProvider.value!!
                ).get()
            } catch (e: Exception) {
                // Handle initialization errors
            }
        }
    }

    // Function to bind camera with selected extension
    fun bindCameraWithExtension(extensionMode: Int, previewView: PreviewView) {
        val cameraProvider = cameraProvider.value ?: return
        val extManager = extensionsManager.value ?: return

        try {
            // Check if extension is available
            if (!extManager.isExtensionAvailable(
                    CameraSelector.DEFAULT_BACK_CAMERA,
                    extensionMode
                )
            ) {
                // Extension not available, use default mode
                bindCameraWithExtension(ExtensionMode.NONE, previewView)
                return
            }

            // Get extension-enabled camera selector
            val cameraSelector = extManager.getExtensionEnabledCameraSelector(
                CameraSelector.DEFAULT_BACK_CAMERA,
                extensionMode
            )

            // Create preview use case
            val preview = Preview.Builder()
                .build()
                .also { it.setSurfaceProvider(previewView.surfaceProvider) }

            // Unbind previous use cases
            cameraProvider.unbindAll()

            // Bind camera with extension
            cameraProvider.bindToLifecycle(
                lifecycleOwner,
                cameraSelector,
                preview,
                imageCapture
            )

            currentExtensionMode = extensionMode
        } catch (e: Exception) {
            // Extension failed, fall back to default mode
            if (extensionMode != ExtensionMode.NONE) {
                bindCameraWithExtension(ExtensionMode.NONE, previewView)
            }
        }
    }

    // Camera UI
    Box(modifier = Modifier.fillMaxSize()) {
        // Preview
        AndroidView(
            factory = { ctx ->
                val previewView = PreviewView(ctx).apply {
                    implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                }
                cameraProviderFuture.addListener({
                    bindCameraWithExtension(currentExtensionMode, previewView)
                }, ContextCompat.getMainExecutor(ctx))
                previewView
            },
            modifier = Modifier.fillMaxSize(),
            update = { previewView ->
                // Update preview if extension mode changes
                cameraProvider.value?.let {
                    bindCameraWithExtension(currentExtensionMode, previewView)
                }
            }
        )

        // Extension selector and capture button
        Column(
            modifier = Modifier.fillMaxSize(),
            verticalArrangement = Arrangement.Bottom
        ) {
            // Extension mode selection
            Row(
                modifier = Modifier
                    .fillMaxWidth()
                    .padding(horizontal = 16.dp),
                horizontalArrangement = Arrangement.SpaceEvenly
            ) {
                ExtensionButton("Auto", ExtensionMode.AUTO, currentExtensionMode) {
                    currentExtensionMode = ExtensionMode.AUTO
                }
                ExtensionButton("HDR", ExtensionMode.HDR, currentExtensionMode) {
                    currentExtensionMode = ExtensionMode.HDR
                }
                ExtensionButton("Night", ExtensionMode.NIGHT, currentExtensionMode) {
                    currentExtensionMode = ExtensionMode.NIGHT
                }
                ExtensionButton("Portrait", ExtensionMode.BOKEH, currentExtensionMode) {
                    currentExtensionMode = ExtensionMode.BOKEH
                }
                ExtensionButton("Normal", ExtensionMode.NONE, currentExtensionMode) {
                    currentExtensionMode = ExtensionMode.NONE
                }
            }

            Spacer(modifier = Modifier.height(16.dp))

            // Capture button
            Box(
                modifier = Modifier
                    .fillMaxWidth()
                    .padding(bottom = 24.dp),
                contentAlignment = Alignment.Center
            ) {
                FloatingActionButton(
                    onClick = {
                        // Take picture with current extension mode
                        val photoFile = File(
                            context.cacheDir,
                            "IMG_${System.currentTimeMillis()}.jpg"
                        )
                        val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile)
                            .build()

                        imageCapture.takePicture(
                            outputOptions,
                            ContextCompat.getMainExecutor(context),
                            object : ImageCapture.OnImageSavedCallback {
                                override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                                    output.savedUri?.let { uri ->
                                        onImageCaptured(uri)
                                    } ?: onImageCaptured(Uri.fromFile(photoFile))
                                }

                                override fun onError(exception: ImageCaptureException) {
                                    onError(exception)
                                }
                            }
                        )
                    },
                    modifier = Modifier.size(72.dp)
                ) {
                    Icon(
                        imageVector = Icons.Default.CameraAlt,
                        contentDescription = "Take photo",
                        modifier = Modifier.size(36.dp)
                    )
                }
            }

            Spacer(modifier = Modifier.height(24.dp))
        }
    }
}

@Composable
fun ExtensionButton(
    name: String,
    mode: Int,
    currentMode: Int,
    onClick: () -> Unit
) {
    Button(
        onClick = onClick,
        colors = ButtonDefaults.buttonColors(
            containerColor = if (currentMode == mode)
                MaterialTheme.colorScheme.primary
            else
                MaterialTheme.colorScheme.surfaceVariant
        )
    ) {
        Text(name)
    }
}
```
Device Compatibility
Camera Extensions availability varies by device manufacturer. Your app should always check if an extension is available before attempting to use it, and provide a graceful fallback to standard camera mode when extensions are not supported.
To check extension availability programmatically:
```kotlin
val extensionsManager = ExtensionsManager.getInstanceAsync(context, cameraProvider).get()

// Check if HDR is available on the back camera
val isHdrAvailable = extensionsManager.isExtensionAvailable(
    CameraSelector.DEFAULT_BACK_CAMERA,
    ExtensionMode.HDR
)
```
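Building on that check, a small helper like the one below (a sketch with illustrative names, not part of the original tutorial) can reduce a list of candidate modes to the ones the current device actually supports, so the UI only offers working options:

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.extensions.ExtensionMode
import androidx.camera.extensions.ExtensionsManager

// Returns only the extension modes this device supports for the given camera.
fun availableExtensionModes(
    extensionsManager: ExtensionsManager,
    cameraSelector: CameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
): List<Int> {
    val candidates = listOf(
        ExtensionMode.AUTO,
        ExtensionMode.HDR,
        ExtensionMode.NIGHT,
        ExtensionMode.BOKEH,
        ExtensionMode.FACE_RETOUCH
    )
    return candidates.filter { mode ->
        extensionsManager.isExtensionAvailable(cameraSelector, mode)
    }
}
```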
🛠 Best Practices: Permission Handling and Error Management
Implementing Runtime Permissions
Modern Android apps require explicit permission handling. Here's how to properly request and manage camera permissions using the Accompanist Permissions library:
```kotlin
import android.Manifest
import androidx.compose.foundation.layout.*
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.CameraAlt
import androidx.compose.material3.*
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.text.style.TextAlign
import androidx.compose.ui.unit.dp
import com.google.accompanist.permissions.ExperimentalPermissionsApi
import com.google.accompanist.permissions.PermissionStatus
import com.google.accompanist.permissions.rememberMultiplePermissionsState

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun CameraPermissionsScreen(
    onPermissionsGranted: @Composable () -> Unit
) {
    // Request camera and microphone permissions
    val permissionsState = rememberMultiplePermissionsState(
        listOf(
            Manifest.permission.CAMERA,
            Manifest.permission.RECORD_AUDIO
        )
    )

    val allPermissionsGranted = permissionsState.permissions.all {
        it.status == PermissionStatus.Granted
    }

    if (allPermissionsGranted) {
        // Permissions granted, show camera UI
        onPermissionsGranted()
    } else {
        // Permission request UI
        Column(
            modifier = Modifier
                .fillMaxSize()
                .padding(24.dp),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            val textToShow = if (permissionsState.shouldShowRationale) {
                "Camera and microphone access is required to use the camera features of this app."
            } else {
                "Camera permission is required for this feature. Please grant the permission."
            }

            Icon(
                imageVector = Icons.Default.CameraAlt,
                contentDescription = null,
                modifier = Modifier
                    .size(120.dp)
                    .padding(bottom = 16.dp)
            )
            Text(
                text = textToShow,
                textAlign = TextAlign.Center,
                modifier = Modifier.padding(bottom = 16.dp)
            )
            Button(
                onClick = { permissionsState.launchMultiplePermissionRequest() }
            ) {
                Text("Request Permissions")
            }
        }
    }
}
```
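As a usage sketch (the `CameraApp` name is illustrative, not part of the tutorial code, and the imports from the earlier examples are assumed), the permission screen can simply wrap the CameraScreen composable from earlier so the camera UI only composes once both permissions are granted:

```kotlin
// Hypothetical top-level composable: gate the camera UI behind the permission screen.
@Composable
fun CameraApp(onImageCaptured: (Uri) -> Unit) {
    CameraPermissionsScreen(
        onPermissionsGranted = {
            CameraScreen(
                onImageCaptured = onImageCaptured,
                onError = { exception ->
                    Log.e("CameraApp", "Capture failed", exception)
                }
            )
        }
    )
}
```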
Error Handling Strategies
Robust error handling is crucial for camera applications. Here are the key errors to handle:
- Camera initialization failures
  - Device doesn't have a suitable camera
  - Camera is in use by another application
  - Insufficient system resources
- Image capture errors
  - Storage issues (no space, permissions)
  - Processing errors
- Video recording issues
  - Encoding failures
  - File size limitations
  - Recording interruptions
Example Error Handler Implementation
```kotlin
fun handleCameraError(error: Exception, context: Context): String {
    return when (error) {
        is CameraUnavailableException -> {
            // Camera is currently unavailable
            Log.e("CameraX", "Camera unavailable", error)
            "Camera is unavailable. Another app might be using it."
        }
        is ImageCaptureException -> {
            // Handle based on error code
            when (error.imageCaptureError) {
                ImageCapture.ERROR_CAPTURE_FAILED ->
                    "Failed to capture image. Please try again."
                ImageCapture.ERROR_FILE_IO ->
                    "File error. Please check storage permissions."
                ImageCapture.ERROR_CAMERA_CLOSED ->
                    "Camera was closed during capture. Please try again."
                else -> "Failed to save photo. Please try again."
            }
        }
        else -> {
            Log.e("CameraX", "Unexpected camera error", error)
            "An unexpected error occurred. Please try again."
        }
    }
}
```
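For example, this helper can be wired into the CameraScreen composable from earlier. The snippet is illustrative only; it assumes a `context` obtained from LocalContext.current and the android.widget.Toast import:

```kotlin
// Illustrative wiring: convert the exception into a user-facing message.
CameraScreen(
    onImageCaptured = { uri ->
        // Hand the URI to your own save/share logic.
    },
    onError = { exception ->
        val message = handleCameraError(exception, context)
        Toast.makeText(context, message, Toast.LENGTH_LONG).show()
    }
)
```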
Performance Optimization Tips
- Lifecycle Awareness: Always bind camera to the lifecycle to ensure proper resource cleanup
- Resolution Management: Select appropriate resolution based on use case
- Background Processing: Use WorkManager for processing captured media
- Memory Management: Close image proxies after analysis
- Threading: Use appropriate executors for camera operations (the sketch below illustrates this point and the previous one)
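The following sketch is not from the original article: the analyzer body is a placeholder and the executor name is an assumption, but it shows an ImageAnalysis use case that runs on its own executor and always closes the ImageProxy:

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

// Dedicated single-thread executor keeps frame analysis off the main thread.
val analysisExecutor = Executors.newSingleThreadExecutor()

val imageAnalysis = ImageAnalysis.Builder()
    // Drop stale frames instead of queueing them, limiting memory pressure.
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also { analysis ->
        analysis.setAnalyzer(analysisExecutor) { imageProxy: ImageProxy ->
            try {
                // ...run your per-frame processing here...
            } finally {
                // Always close the proxy, or CameraX stops delivering new frames.
                imageProxy.close()
            }
        }
    }

// Bind imageAnalysis together with your other use cases via bindToLifecycle.
```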
✅ Summary: Building Production-Ready Camera Apps
By following this comprehensive guide, you now have the knowledge to implement professional camera features in your Android app using CameraX and Jetpack Compose.
What You've Learned:
- Camera Preview: Display real-time camera feed with Compose integration
- Photo Capture: Take high-quality photos with various capture modes
- Video Recording: Record videos with audio and quality options
- ML Integration: Add intelligence to your camera with ML Kit
- Advanced Photography: Leverage device-specific camera extensions like HDR and Night mode
- Best Practices: Handle permissions, errors, and lifecycle events properly
Key Benefits of CameraX:
- Cross-Device Compatibility: Works consistently on 90%+ of Android devices
- Modern Architecture: Clean, lifecycle-aware implementation
- Simplified Development: Reduce camera code by up to 80% compared to Camera2 API
- Future-Proof: Regular updates and improvements from Google
Next Steps:
Consider exploring these advanced CameraX topics:
- Custom image processing pipelines
- Concurrent use cases (e.g., analyzing while recording)
- Camera extensions vendor integration
- Testing camera implementations
For more information, check out the official CameraX documentation and CameraX release notes.
👨💻 About the Author
This tutorial was created by an experienced Android developer with expertise in camera implementations, ML Kit integration, and Jetpack Compose. If you found this guide helpful, please share it with other developers who might benefit from these camera integration techniques.
📚 Frequently Asked Questions (FAQs)
Q: Do I need to use Camera2 API alongside CameraX?
A: No, CameraX is built on top of Camera2 API, but abstracts away its complexity. You don't need to interact with Camera2 directly unless you have very specific requirements not covered by CameraX.
Q: Does CameraX work on all Android devices?
A: CameraX works on devices running Android 5.0 (API level 21) or higher, which covers approximately 98% of active Android devices. The camera extensions feature requires Android 7.0 (API level 24) or higher.
Q: How do I handle device orientation changes with CameraX?
A: CameraX handles orientation changes automatically in most cases. The captured images and videos will be correctly oriented based on the device orientation at capture time.
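If you lock the activity's orientation but still want captures rotated to match how the user is holding the device, one common approach (sketched below, assuming an existing `imageCapture` use case and a `context`) is to forward the device orientation to the use case's targetRotation:

```kotlin
import android.view.OrientationEventListener
import android.view.Surface

// Keep ImageCapture's target rotation in sync with the physical device orientation.
val orientationListener = object : OrientationEventListener(context) {
    override fun onOrientationChanged(orientation: Int) {
        if (orientation == OrientationEventListener.ORIENTATION_UNKNOWN) return
        imageCapture.targetRotation = when (orientation) {
            in 45..134 -> Surface.ROTATION_270
            in 135..224 -> Surface.ROTATION_180
            in 225..314 -> Surface.ROTATION_90
            else -> Surface.ROTATION_0
        }
    }
}
orientationListener.enable()
// Call orientationListener.disable() when the screen is torn down.
```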
Q: Can I use CameraX without Jetpack Compose?
A: Absolutely! CameraX works with both traditional XML-based layouts and Jetpack Compose. This tutorial focuses on Compose integration, but the core CameraX concepts apply to both UI approaches.
Q: How do I handle low memory situations during camera usage?
A: Implement the onLowMemory() callback in your activity or fragment and release non-essential camera resources. Also consider reducing the preview or capture resolution when memory is constrained.
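For illustration, a minimal sketch of that idea follows; the CameraActivity class and its fields are hypothetical, and which use case you drop under memory pressure is up to your app:

```kotlin
import androidx.activity.ComponentActivity
import androidx.camera.core.ImageAnalysis
import androidx.camera.lifecycle.ProcessCameraProvider

// Hypothetical activity that keeps references to its provider and an optional use case.
class CameraActivity : ComponentActivity() {

    private var cameraProvider: ProcessCameraProvider? = null
    private var imageAnalysis: ImageAnalysis? = null

    override fun onLowMemory() {
        super.onLowMemory()
        // Release the optional analysis use case; preview and capture keep working.
        imageAnalysis?.let { analysis ->
            cameraProvider?.unbind(analysis)
        }
    }
}
```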