gptee
Output from a language model, using standard input as the prompt.
Now supports GPT-3.5 chat completions!
Installation
Install this tool locally with `cargo` (recommended):
cargo install --locked gptee
Usage
gptee is designed for use within shell scripts and other programs, and it also works in an interactive shell.
Simple example
echo Tell me a joke | gptee
Why did the chicken cross the road?
To get to the other side!
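Inside a script, the reply can be captured like any other command output. A minimal sketch, with a `fake_gptee` stub standing in for the real binary (an actual call needs an API key and network access):

```shell
#!/bin/sh
# Stub standing in for `gptee`, so the pipeline shape can be shown
# without making an API call. Swap in the real binary in practice.
fake_gptee() {
    echo "Why did the chicken cross the road?"
}

# Typical script usage: feed the prompt on stdin, capture the reply.
reply=$(echo "Tell me a joke" | fake_gptee)
echo "$reply"
```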
Compose shell commands just as you would in a script:
echo Tell me a joke | gptee | say
You can even have it compose commands in a script and execute them. Use caution before running arbitrary shell scripts!
echo Give me just a macOS zsh command to get the free space on my hard drive \
| gptee -s "Prefix each line of output with a pound sign if it is not meant to be executed" \
# pipe this to `sh` to have it execute
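With that prefix convention, a small filter can drop the commented lines before anything reaches `sh`. A sketch using a canned string in place of real gptee output (actual replies vary from run to run):

```shell
#!/bin/sh
# Canned stand-in for gptee output: one explanatory line (pound-prefixed)
# followed by the command the model proposed.
output='# Reports free space on the root volume
df -h /'

# Drop lines beginning with a pound sign, leaving only executable commands.
# Pipe the result to `sh` only after reviewing it.
printf '%s\n' "$output" | grep -v '^#'
# prints: df -h /
```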
Try it with a custom model. By default, gptee uses gpt-3.5-turbo.
echo Tell me a joke | gptee -m text-davinci-003
With chat completion models (such as gpt-3.5-turbo), you can inject a system message with -s or --system-message. For davinci and other non-chat models, the system message is prefixed to the prompt.
echo "Tell me I'm pretty" | gptee -s "You only speak French"
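For a non-chat model, the effect of `-s` is simply text prepended to the prompt. A plain-shell sketch of the resulting prompt text (the newline join is an assumption for illustration, not taken from gptee's source):

```shell
#!/bin/sh
system_message="You only speak French"
user_prompt="Tell me I'm pretty"

# For non-chat models, the system message is prefixed to the prompt text;
# joining with a newline here is illustrative, not gptee's exact behavior.
printf '%s\n%s\n' "$system_message" "$user_prompt"
```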
For more features, see the --help / -h flag.
Found any bugs?
If you run into any bugs or have suggestions for improvements, please open an issue on the repository.
License
This project is licensed under the MIT license.
Help output
output from a language model using standard input as the prompt
Usage: gptee [OPTIONS] [FILE]...
Arguments:
[FILE]... File(s) to print / concatenate. Use a dash ('-') or no argument at all to read from standard input
Options:
-m, --model <MODEL>
ID of the model to use. You can use the [List models](https://beta.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://beta.openai.com/docs/models/overview) for descriptions of them [default: gpt-3.5-turbo]
-l, --max-tokens <MAX_TOKENS>
The maximum number of [tokens](/tokenizer) to generate in the completion
--stop <STOP>
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence
-t, --temperature <TEMPERATURE>
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic [default: 0.7]
--top-p <TOP_P>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered [default: 1]
-s, --system-message <SYSTEM_MESSAGE>
For chat completions, you can specify a system message to be sent to the model. This message will be sent to the model before the user's message. This is useful for providing context to the model, or for providing a prompt to the model. See https://platform.openai.com/docs/guides/chat for more details
-h, --help
Print help (see more with '--help')
-V, --version
Print version