3 releases (breaking)

0.3.0 May 7, 2023
0.2.0 Jan 7, 2021
0.1.0 Apr 30, 2020


MIT license

2KB

Safe video4linux (v4l) bindings


This crate provides safe bindings for the Video for Linux (V4L) stack. Modern device drivers usually implement the v4l2 API, while older drivers may rely on the legacy v4l API. Such legacy devices can still be used by selecting the libv4l feature of this crate.

Goals

This crate shall provide the v4l-sys package to enable full (but unsafe) access to libv4l*. On top of that, a higher-level, more idiomatic API shall be provided for working with video capture devices on Linux.

Simple utility applications for listing devices and capturing frames will be provided. A minimal OpenGL/Vulkan viewer for displaying frames is planned for the future.

Changelog

See CHANGELOG.md

Dependencies

You can choose between two dependencies (both of which are provided internally by this crate):

  • libv4l-sys

    Links against the libv4l* stack, including libv4l1, libv4l2 and libv4lconvert. This has the advantage that common capture formats such as RGB3 can be emulated in userspace via libv4lconvert and friends. However, libv4l does not support some features such as userptr buffers.

  • v4l2-sys

    Uses only the v4l2 API as provided by the Linux kernel via videodev2.h. You get support for all v4l2 features such as userptr buffers, but you may have to do format conversion yourself, e.g. when you need RGB/BGR buffers, which common devices such as webcams may not provide natively (see the conversion sketch after this list).
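
For illustration, a software conversion of the kind hinted at above might look like the following sketch. Assumptions not taken from this crate: the helper name yuyv_to_rgb, a packed YUYV buffer with an even width, and the commonly used BT.601 integer coefficients.

// Sketch: convert a packed YUYV (YUY2) frame to RGB24 in software.
// `data` is expected to hold width * height * 2 bytes; every 4-byte group
// encodes two horizontally adjacent pixels as Y0 U Y1 V.
fn yuyv_to_rgb(data: &[u8], width: usize, height: usize) -> Vec<u8> {
    let clamp = |x: i32| x.clamp(0, 255) as u8;
    let mut rgb = Vec::with_capacity(width * height * 3);
    for chunk in data.chunks_exact(4) {
        let (y0, u, y1, v) = (
            chunk[0] as i32,
            chunk[1] as i32,
            chunk[2] as i32,
            chunk[3] as i32,
        );
        for y in [y0, y1] {
            // Integer approximation of the BT.601 YCbCr -> RGB transform.
            let c = 298 * (y - 16);
            let d = u - 128;
            let e = v - 128;
            rgb.push(clamp((c + 409 * e + 128) >> 8)); // R
            rgb.push(clamp((c - 100 * d - 208 * e + 128) >> 8)); // G
            rgb.push(clamp((c + 516 * d + 128) >> 8)); // B
        }
    }
    rgb
}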

Enable either the libv4l or the v4l2 feature of this crate to choose it as the backend.
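
For example, a Cargo.toml entry along these lines would select the libv4l backend. This is only a sketch: the version requirement is illustrative, and disabling default features assumes that v4l2 is the default backend.

[dependencies]
# Pin to a concrete release in practice; "*" is only used here for brevity.
v4l = { version = "*", default-features = false, features = ["libv4l"] }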

Usage

Below is a quick example of using this crate. It introduces the basics necessary to capture frames from a streaming device such as a webcam.

use v4l::buffer::Type;
use v4l::io::mmap::Stream;
use v4l::io::traits::CaptureStream;
use v4l::video::Capture;
use v4l::Device;
use v4l::FourCC;

fn main() {
    // Create a new capture device with a few extra parameters
    let mut dev = Device::new(0).expect("Failed to open device");

    // Let's say we want to explicitly request another format
    let mut fmt = dev.format().expect("Failed to read format");
    fmt.width = 1280;
    fmt.height = 720;
    fmt.fourcc = FourCC::new(b"YUYV");
    let fmt = dev.set_format(&fmt).expect("Failed to write format");

    // The actual format chosen by the device driver may differ from what we
    // requested! Print it out to get an idea of what is actually used now.
    println!("Format in use:\n{}", fmt);

    // Now we'd like to capture some frames!
    // First, we need to create a stream to read buffers from. We choose a
    // mapped buffer stream, which uses mmap to directly access the device
    // frame buffer. No buffers are copied nor allocated, so this is actually
    // a zero-copy operation.

    // To achieve the best possible performance, you may want to use a
    // UserBufferStream instance, but this is not supported on all devices,
    // so we stick to the mapped case for this example.
    // Please refer to the rustdoc docs for a more detailed explanation about
    // buffer transfers.

    // Create the stream, which will internally 'allocate' (as in map) the
    // number of requested buffers for us.
    let mut stream = Stream::with_buffers(&mut dev, Type::VideoCapture, 4)
        .expect("Failed to create buffer stream");

    // At this point, the stream is ready and all buffers are setup.
    // We can now read frames (represented as buffers) by iterating through
    // the stream. Once an error condition occurs, the iterator will return
    // None.
    loop {
        let (buf, meta) = stream.next().unwrap();
        println!(
            "Buffer size: {}, seq: {}, timestamp: {}",
            buf.len(),
            meta.sequence,
            meta.timestamp
        );

        // To process the captured data, you can pass it somewhere else.
        // If you want to modify the data or extend its lifetime, you have to
        // copy it. This is a best-effort tradeoff solution that allows for
        // zero-copy readers while enforcing a full clone of the data for
        // writers.
    }
}
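
Since the mapped buffer returned by the stream is only borrowed for the duration of one iteration, a frame that should be kept or modified has to be cloned first. A minimal sketch, assuming it is placed inside the capture loop above (the name owned_frame is purely illustrative):

// `buf` is the &[u8] slice yielded by the stream. Cloning it into an owned
// Vec<u8> detaches the pixel data from the mapped buffer, so it can outlive
// the current iteration (e.g. be sent to another thread or written to disk).
let owned_frame: Vec<u8> = buf.to_vec();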

Have a look at the provided examples for more sample applications.
