#amazon-s3 #s3 #find #regex #pattern #aws #pattern-match

bin+lib s3find

A command-line utility to walk an Amazon S3 hierarchy. s3find is an analog of the find command for Amazon S3.

18 releases

0.7.2 Aug 4, 2020
0.7.1 Jun 7, 2020
0.7.0 May 30, 2020
0.6.0 Mar 24, 2020
0.3.0 Nov 11, 2018

#2550 in Command line utilities

30 downloads per month

BSD-2-Clause license

87KB
2K SLoC

s3find


A command-line utility to walk an Amazon S3 hierarchy. An analog of the find command for Amazon S3.

Releases

Release-page binaries

The Github release page provides binaries for

  • Windows
  • Linux
  • macOS

Docker

Docker images on Docker Hub

  • develop: anderender/s3find:latest
  • release: anderender/s3find:<version>
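A typical container invocation might look like the sketch below; it assumes the image's entrypoint is the s3find binary itself (not confirmed here), and the bucket path is illustrative:

```shell
# Pull the development image
docker pull anderender/s3find:latest

# Run s3find from the container, forwarding AWS credentials
# from the host environment (assumes the entrypoint is s3find)
docker run --rm \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  anderender/s3find:latest \
  's3://example-bucket/example-path' --name '*' ls
```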

Usage

USAGE:
    s3find [FLAGS] [OPTIONS] <path> [SUBCOMMAND]

FLAGS:
    -h, --help
            Prints help information

        --summarize
            Print summary statistics

    -V, --version
            Prints version information


OPTIONS:
        --aws-access-key <aws-access-key>
            AWS access key. Optional.

        --aws-region <aws-region>
            The region to use. Default value is us-east-1 [default: us-east-1]

        --aws-secret-key <aws-secret-key>
            AWS secret key. Optional.

        --size <bytes-size>...
            File size for match:
                5k - exact match 5k,
                +5k - bigger than 5k,
                -5k - smaller than 5k,

            Possible file size units are as follows:
                k - kilobytes (1024 bytes)
                M - megabytes (1024 kilobytes)
                G - gigabytes (1024 megabytes)
                T - terabytes (1024 gigabytes)
                P - petabytes (1024 terabytes)

        --iname <ipatern>...
            Case-insensitive glob pattern for match, can be multiple

        --limit <limit>
            Limit result

        --name <npatern>...
            Glob pattern for match, can be multiple

        --page-size <number>
            The number of results to return in each response to a
            list operation. The default value is 1000 (the maximum
            allowed). Using a lower value may help if an operation
            times out. [default: 1000]

        --regex <rpatern>...
            Regex pattern for match, can be multiple

        --mtime <time>...
            Modification time for match, a time period:
                -5d - for period from now-5d to now
                +5d - for period before now-5d

            Possible time units are as follows:
                s - seconds
                m - minutes
                h - hours
                d - days
                w - weeks

            Can be multiple, but should be overlapping

ARGS:
    <path>
            S3 path to walk through. It should be s3://bucket/path


SUBCOMMANDS:
    copy        Copy matched keys to a s3 destination
    delete      Delete matched keys
    download    Download matched keys
    exec        Exec any shell program with every key
    help        Prints this message or the help of the given subcommand(s)
    ls          Print the list of matched keys
    lstags      Print the list of matched keys with tags
    move        Move matched keys to a s3 destination
    nothing     Do not do anything with keys, do not print them as well
    print       Extended print with detail information
    public      Make the matched keys public available (readonly)
    tags        Set the tags(overwrite) for the matched keys


The authorization flow is the following chain:
  * use credentials from arguments provided by users
  * use environment variable credentials: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  * use credentials from an AWS profile file.
    The profile can be set via the environment variable AWS_PROFILE
    The credentials file path can be set via the environment variable AWS_SHARED_CREDENTIALS_FILE
  * use AWS instance IAM profile
  * use AWS container IAM profile
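The first three links of the chain can be exercised explicitly; a sketch, where key values and the bucket path are placeholders:

```shell
# 1. Credentials passed as arguments (highest priority)
s3find --aws-access-key "$MY_ACCESS_KEY_ID" --aws-secret-key "$MY_SECRET_KEY" \
  's3://example-bucket/example-path' --name '*' ls

# 2. Environment variable credentials
export AWS_ACCESS_KEY_ID=...        # placeholder
export AWS_SECRET_ACCESS_KEY=...    # placeholder
s3find 's3://example-bucket/example-path' --name '*' ls

# 3. A named profile from the shared credentials file
export AWS_PROFILE=staging
export AWS_SHARED_CREDENTIALS_FILE="$HOME/.aws/credentials"
s3find 's3://example-bucket/example-path' --name '*' ls
```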

Examples

Find paths by glob pattern

Print

s3find 's3://example-bucket/example-path' --name '*' print

Delete

s3find 's3://example-bucket/example-path' --name '*' delete

List

s3find 's3://example-bucket/example-path' --name '*' ls

List keys with tags

s3find 's3://example-bucket/example-path' --name '*' lstags

Exec

s3find 's3://example-bucket/example-path' --name '*' exec 'echo {}'

Download

s3find 's3://example-bucket/example-path' --name '*' download

Copy files to another s3 location

s3find 's3://example-bucket/example-path' --name '*.dat' copy -f 's3://example-bucket/example-path2'

Move files to another s3 location

s3find 's3://example-bucket/example-path' --name '*.dat' move -f 's3://example-bucket/example-path2'

Set tags

s3find 's3://example-bucket/example-path' --name '*9*' tags 'key:value' 'env:staging'

Make publicly available

s3find 's3://example-bucket/example-path' --name '*9*' public

Find paths by case-insensitive glob pattern

s3find 's3://example-bucket/example-path' --iname '*s*' ls

Find paths by regex pattern

s3find 's3://example-bucket/example-path' --regex '1$' print

Find paths by size

Exact match

s3find 's3://example-bucket/example-path' --size 0 print

Larger

s3find 's3://example-bucket/example-path' --size +10M print

Smaller

s3find 's3://example-bucket/example-path' --size -10k print

Find paths by modification time

Files modified within the last 10 seconds

s3find 's3://example-bucket/example-path' --mtime 10 print

Files modified more than 10 minutes ago

s3find 's3://example-bucket/example-path' --mtime +10m print

Files modified within the last 10 hours

s3find 's3://example-bucket/example-path' --mtime -10h print

Multiple filters

Same filter

Files with a size between 10 and 20 bytes

s3find 's3://example-bucket/example-path' --size +10 --size -20 print

Different filters

s3find 's3://example-bucket/example-path' --size +10 --name '*file*' print
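As the example above suggests, filters of different types narrow the match together (a key must satisfy all of them). A sketch combining only the flags documented above, with an illustrative bucket path:

```shell
# Compressed logs larger than 1M, modified within the last 7 days
s3find 's3://example-bucket/logs' \
  --regex '\.gz$' \
  --size +1M \
  --mtime -7d \
  ls
```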

Additional control

Select a limited number of keys

s3find 's3://example-bucket/example-path' --name '*' --limit 10

Limit the page size of requests

s3find 's3://example-bucket/example-path' --name '*' --page-size 100

How to build and install

Requirements: rust and cargo

# Build
cargo build --release

# Install from local source
cargo install --path .

# Install latest from git
cargo install --git https://github.com/AnderEnder/s3find-rs

# Install from crate package
cargo install s3find

Dependencies

~25–38MB
~655K SLoC