8 releases

0.3.10 | August 4, 2023
0.3.9  | July 29, 2023
0.3.7  | March 16, 2023
0.3.6  | November 18, 2022
0.3.2  | August 27, 2021

#220 in Text Processing

42 downloads per month

240KB
2K SLoC
Reason: A Shell for Research Papers
- Have I read this paper before?
- Which OSDI 2021 papers have I read?
- Which papers have "Distributed" in their title?
- How many 2020 papers co-authored by Professor Mosharaf Chowdhury have I read?

Well, just ask reason.
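Each of those questions maps to a one-liner at the reason prompt. The sketch below is only illustrative and leans on the filter syntax demonstrated further down this page; treating `by` and `in` as filter keywords and piping into `wc` for a count are assumptions based on the command descriptions below.

$ reason
>> # Have I read this paper? Filter by a title regex (Shadowtutor is a paper from the example library below).
>> ls 'Shadowtutor'
>> # Which OSDI 2021 papers have I read? (assumes `at` and `in` filters can be combined)
>> ls at OSDI in 2021
>> # Which papers have "Distributed" in their title?
>> ls Distributed
>> # How many 2020 papers were co-authored by Prof. Chowdhury? (assumes a `by` filter plus `wc`)
>> ls by Chowdhury in 2020 | wc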
How it works
Filtering and listing papers
$ reason
>> # Show all papers.
>> ls
+----------------------------------------------------------+----------------+---------+------+
| title | first author | venue | year |
+============================================================================================+
| Shadowtutor: Distributed Partial Distillation for Mobile | Jae-Won Chung | ICPP | 2020 |
| Video DNN Inference | | | |
|----------------------------------------------------------+----------------+---------+------|
| Efficient Memory Disaggregation with Infiniswap | Juncheng Gu | NSDI | 2017 |
|----------------------------------------------------------+----------------+---------+------|
| Refurbish Your Training Data: Reusing Partially | Gyewon Lee | ATC | 2021 |
| Augmented Samples for Faster Deep Neural Network | | | |
| Training | | | |
|----------------------------------------------------------+----------------+---------+------|
| Finding Consensus Bugs in Ethereum via Multi-transaction | Youngseok Yang | OSDI | 2021 |
| Differential Fuzzing | | | |
|----------------------------------------------------------+----------------+---------+------|
| Tiresias: A GPU Cluster Manager for Distributed Deep | Juncheng Gu | NSDI | 2019 |
| Learning | | | |
|----------------------------------------------------------+----------------+---------+------|
| Nimble: Lightweight and Parallel GPU Task Scheduling for | Woosuk Kwon | NeurIPS | 2020 |
| Deep Learning | | | |
+----------------------------------------------------------+----------------+---------+------+
>> # Filter by 'title'. All these are regexes!
>> ls 'Deep Learning$'
+------------------------------------------------------------+--------------+---------+------+
| title | first author | venue | year |
+============================================================================================+
| Tiresias: A GPU Cluster Manager for Distributed Deep | Juncheng Gu | NSDI | 2019 |
| Learning | | | |
|------------------------------------------------------------+--------------+---------+------|
| Nimble: Lightweight and Parallel GPU Task Scheduling for | Woosuk Kwon | NeurIPS | 2020 |
| Deep Learning | | | |
+------------------------------------------------------------+--------------+---------+------+
>> # You may set default filters with `cd`.
>> # BTW, `cd .`, `cd ..`, `cd -`, and `cd` are supported, too.
>> cd 'Deep Learning$'
>> pwd
title matches 'Deep Learning$'
>> # Default filters are automatically applied.
>> # Infiniswap (NSDI'17) is not shown, because its title doesn't match 'Deep Learning$'.
>> ls at NSDI
+------------------------------------------------------------+--------------+---------+------+
| title | first author | venue | year |
+============================================================================================+
| Tiresias: A GPU Cluster Manager for Distributed Deep | Juncheng Gu | NSDI | 2019 |
| Learning | | | |
+------------------------------------------------------------+--------------+---------+------+
>> # Delete Tiresias.
>> ls at NSDI | rm
Removed 1 paper.
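The `cd` variants noted in the session above (`cd .`, `cd ..`, `cd -`, and a bare `cd`) are not demonstrated there. The sketch below assumes that `cd` accepts the same filter keywords as `ls`, that a second `cd` ANDs another filter onto the set (as the command description below states), and that `cd -` and a bare `cd` mirror their shell counterparts; output is omitted since its exact format may differ.

>> # Still inside the 'Deep Learning$' default filter from above.
>> # AND another filter onto the set (assumed syntax), then check it.
>> cd at NeurIPS
>> pwd
>> # Go back to the previous filter set, then clear everything (assumed semantics).
>> cd -
>> cd
>> pwd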
Importing new papers
>> # Import directly from arXiv and USENIX. This will also download paper PDFs.
>> curl https://arxiv.org/abs/2105.11367
+--------------------------------------------------------+--------------+-------+------+
| title | first author | venue | year |
+======================================================================================+
| FedScale: Benchmarking Model and System Performance of | Fan Lai | arXiv | 2021 |
| Federated Learning | | | |
+--------------------------------------------------------+--------------+-------+------+
>> curl https://www.usenix.org/conference/nsdi21/presentation/you
+------------------------------------------+--------------+-------+------+
| title | first author | venue | year |
+========================================================================+
| Ship Compute or Ship Data? Why Not Both? | Jie You | NSDI | 2021 |
+------------------------------------------+--------------+-------+------+
>> # Modify paper metadata.
>> ls ship | set as Kayak
>> # Or, import manually.
>> touch 'Batch Normalization: Accelerating Deep Network Training by Reducing Internal
Covariate Shift' by 'Sergey Ioffe, Christian Szegedy' at ICML in 2015 as BN @ BatchNorm.pdf
+--------------------------------------------------------------+--------------+-------+------+
| title | first author | venue | year |
+============================================================================================+
| Batch Normalization: Accelerating Deep Network Training by | Sergey Ioffe | ICML | 2015 |
| Reducing Internal Covariate Shift | | | |
+--------------------------------------------------------------+--------------+-------+------+
Read, take notes, and build a book!
>> # Open with a PDF viewer (`open`) and edit markdown notes with your editor (`ed`).
>> ls 'Why Not Both' | open | ed
+------------------------------------------+--------------+-------+------+
| title | first author | venue | year |
+========================================================================+
| Ship Compute or Ship Data? Why Not Both? | Jie You | NSDI | 2021 |
+------------------------------------------+--------------+-------+------+
>> # Format your markdown notes into an HTML book, and open it in your browser.
>> ls 'Deep Learning' | printf
+------------------------------------------------------------+--------------+---------+------+
| title | first author | venue | year |
+============================================================================================+
| Tiresias: A GPU Cluster Manager for Distributed Deep | Juncheng Gu | NSDI | 2019 |
| Learning | | | |
|------------------------------------------------------------+--------------+---------+------|
| Nimble: Lightweight and Parallel GPU Task Scheduling for | Woosuk Kwon | NeurIPS | 2020 |
| Deep Learning | | | |
+------------------------------------------------------------+--------------+---------+------+
`printf` creates an HTML book from your notes.
Commands
Invoking `reason` starts a new command prompt. It accepts Unix-like commands, but the commands operate on the research papers in your paper library.
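For instance, a short first session might look like the sketch below (output omitted; `man`, `ls`, and `exit` are described in the list that follows):

$ reason
>> # Print the top-level documentation.
>> man man
>> # Show every paper in the library, then quit.
>> ls
>> exit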
Working now
- `ls` filters and prints papers in a table. The default columns are title, first author (by1), venue (at), and year (in).
- `cd` adds an AND filter to the default filter set (empty at startup).
- `pwd` shows the current default filter set created with `cd`.
- `touch` creates a new entry in your paper library.
- `curl` imports papers from the web, e.g. arXiv or usenix.org. It also downloads paper PDFs when available. Downloading from a raw PDF URL and inferring metadata fields is supported experimentally.
- `rm` removes entries from your paper library.
- `set` sets paper attributes, including custom labels that can also be used to color papers in `ls`.
- `printf` creates an HTML page of your notes with `mdbook`.
- `open` opens papers with your PDF viewer (configurable; default zathura).
- `ed` opens your editor (configurable; default vim), where you can edit your notes.
- `wc` counts the number of papers.
- `man` followed by a command prints documentation for that command.
- `exit` or Ctrl-d quits `reason`.
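A couple of these do not appear in the walkthrough above; the sketch below shows how they compose with the pipe syntax (output omitted, since its exact format is not documented here):

>> # Count the papers published at NSDI.
>> ls at NSDI | wc
>> # Read the documentation for `curl`.
>> man curl
>> # Quit the shell.
>> exit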
Not yet, but hopefully soon (contributions are more than welcome!)
- `grep` returns the list of papers whose notes contain the query string you specify.
- `sort` sorts papers by the specified column.
- `stat` prints the metadata and notes of papers.
- `top` prints a summary of the paper library.
Installation
You can grab a binary from the releases, or run `cargo install reason-shell`.
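For the cargo route, a minimal sketch (assuming a Rust toolchain is installed and cargo's bin directory is on your PATH):

$ cargo install reason-shell
$ reason
>> exit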
Cross-platform support
`reason` currently supports Linux and macOS. Windows is not included because the owner doesn't have a Windows machine at the moment.

To share data across multiple platforms, users are advised to place `reason` metadata, PDF files, and markdown notes in a location synced by a cloud storage service such as Google Drive. I use the official Google Drive app on macOS and Insync on Linux. This comes with an extra perk: you can also read the PDFs on an iPad synced with your cloud storage.
Documentation
If you already have `reason`, run `man man` to see the top-level documentation.

If you're just exploring whether to use `reason`, take a look at the `man` directory.
Configuration
The configuration file is kept at `~/.config/reason/config.toml`. If it doesn't exist, `reason` will generate one populated with default settings.

For more information, open `reason` and run `man config`, or read `man/config.md`.
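To check the available settings before editing the file by hand, the documentation can also be read from inside the shell (a minimal sketch):

$ reason
>> # Print the configuration documentation.
>> man config
>> exit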
Dependencies
~24–39MB
~660K SLoC