Update README.md
bjesus authored Sep 2, 2024
1 parent 18e1669 commit ad470b5
Showing 1 changed file with 26 additions and 3 deletions.
```
curl https://news.ycombinator.com/
span > a
.sitebit a
```
2. Run `go run github.com/bjesus/pipet/cmd/pipet@latest hackernews.pipet`
3. See all of the latest Hacker News stories in your terminal!

<details><summary>Get as JSON</summary>Add the `--json` flag to make Pipet output JSON, like `go run github.com/bjesus/pipet/cmd/pipet@latest --json myfile.pipet` or `pipet --json myfile.pipet`</details>
<details><summary>Render to a template</summary>Peek a boo!</details>
<details><summary>Use pipes</summary>

You can use Unix pipes after your queries, as if they were running in a normal shell. For example, count the characters in the title and extract the full URL using [htmlq](https://github.com/mgdm/htmlq):

```
curl https://news.ycombinator.com/
.title .titleline
span > a
span > a | wc -c
.sitebit a
.sitebit a | htmlq --attribute href a
```
</details>
<details><summary>Monitor for changes</summary>

Set an interval and a command to run on change, and Pipet will notify you when something happens. For example, get a notification whenever the top Hacker News story changes:

```
curl https://news.ycombinator.com/
.title .titleline a
```

Then run it with `pipet --interval 60 --on-change "notify-send {}" hackernews.pipet` to check every 60 seconds.

</details>

# Pipet files
Pipet files describe where and how to get the data you are interested in. They are normal text files containing one or more blocks, separated by an empty line. Lines beginning with `//` are ignored and can be used for comments. Every block has at least two sections: the first line contains the URL and the tool used for scraping, and the following lines describe the selectors that reach the data you would like to scrape. Some blocks can end with a special last line pointing to the "next page" selector - more on that later.
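
To make the block structure concrete, here is a small illustrative Pipet file with two blocks and a comment. The selectors in the first block come from the Hacker News example above; the second block's URL and selector are made up purely to show that blocks are scraped independently:

```
// First block: story titles and source links from Hacker News
curl https://news.ycombinator.com/
.title .titleline
span > a
.sitebit a

// Second block: a different page, separated by an empty line
curl https://example.com/
h1
```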
