Merge branch 'dev'

ehsandeep committed Sep 16, 2023
2 parents 7a7a295 + 45cd961 commit 7c95c8f
Showing 26 changed files with 1,977 additions and 192 deletions.
9 changes: 8 additions & 1 deletion .github/workflows/dockerhub-push.yml
@@ -37,4 +37,11 @@ jobs:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: projectdiscovery/nuclei:latest,projectdiscovery/nuclei:${{ steps.meta.outputs.TAG }}

      - name: Update DockerHub Description
        uses: peter-evans/dockerhub-description@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}
          repository: projectdiscovery/nuclei
36 changes: 36 additions & 0 deletions .github/workflows/performance-test.yaml
@@ -0,0 +1,36 @@
name: 🔨 Performance Test

on:
  workflow_dispatch:
  schedule:
    # Weekly
    - cron: '0 0 * * 0'

jobs:
  build:
    name: Test Performance
    strategy:
      matrix:
        go-version: [1.20.x]
        os: [ubuntu-latest, macOS-latest]

    runs-on: ${{ matrix.os }}
    steps:
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: ${{ matrix.go-version }}

      - name: Check out code
        uses: actions/checkout@v3

      - name: Go Mod hygiene
        run: |
          go clean -modcache
          go mod tidy
        working-directory: v2/

      # Max GH execution time 6H => timeout after that
      - name: Running performance with big list
        run: go run -race . -l ../functional-test/targets-150.txt
        working-directory: v2/cmd/nuclei/
29 changes: 29 additions & 0 deletions SYNTAX-REFERENCE.md
@@ -2450,6 +2450,35 @@ Inputs contains inputs for the network socket

<div class="dd">

<code>port</code> <i>string</i>

</div>
<div class="dt">

description: |
   Port is the port to send network requests to. This acts as the default port but is overridden if the target/input contains
   non-HTTP(S) ports like 80, 8080, 8081, etc.
</div>

<hr />

<div class="dd">

<code>exclude-ports</code> <i>string</i>

</div>
<div class="dt">

description: |
   ExcludePorts is the list of ports to exclude from being scanned. It is intended to be used with the `Port` field and contains a list of ports that are ignored/skipped.
</div>

<hr />

<div class="dd">

<code>read-size</code> <i>int</i>

</div>
2 changes: 2 additions & 0 deletions docs/getting-started/install.mdx
@@ -24,6 +24,8 @@ title: 'Install'
```bash
docker pull projectdiscovery/nuclei:latest
```

Docker-specific usage instructions can be found [here](./running#running-with-docker).
</Tab>
<Tab title="Github">
```bash
74 changes: 50 additions & 24 deletions docs/getting-started/running.mdx
@@ -7,29 +7,29 @@ title: 'Running Nuclei'

## Running **Nuclei**

Nuclei templates can be primarily executed in two ways:

1. **Templates** (`-t/templates`)

By default, all the templates (except those in the nuclei-ignore list) are executed from the default template installation path.

```sh
nuclei -u https://example.com
```

A custom template directory, or multiple template directories, can be executed as follows:

```sh
nuclei -u https://example.com -t cves/ -t exposures/
```

Custom template GitHub repos are downloaded under the `github` directory. Custom repo templates can be passed as follows:

```sh
nuclei -u https://example.com -t github/private-repo
```

Similarly, templates can be executed against a list of URLs.

```sh
nuclei -list http_urls.txt
@@ -41,7 +41,7 @@ nuclei -list http_urls.txt
nuclei -u https://example.com -w workflows/
```

Similarly, workflows can be executed against a list of URLs.

```sh
nuclei -list http_urls.txt -w workflows/wordpress-workflow.yaml
@@ -77,7 +77,9 @@ And this example will run all the templates available under `~/nuclei-templates/
nuclei -u https://example.com -tags config -t exposures/
```

Multiple filters work together with an AND condition; the example below runs all templates with the `cve` tag AND `critical` OR `high` severity AND `geeknik` as the template author.

```sh
nuclei -u https://example.com -tags cve -severity critical,high -author geeknik
@@ -155,19 +157,21 @@ To use this feature, users need to set the following environment variables:
<Accordion title="For AWS Bucket" icon="pencil">

```bash
export AWS_ACCESS_KEY=AKIAXXXXXXXX
export AWS_SECRET_KEY=XXXXXX
export AWS_REGION=us-xxx-1
export AWS_TEMPLATE_BUCKET=aws_bucket_name
```

</Accordion>
<Accordion title="For Azure Blob Storage" icon="pencil">

```bash
export AZURE_TENANT_ID=00000000-0000-0000-0000-000000000000
export AZURE_CLIENT_ID=00000000-0000-0000-0000-000000000000
export AZURE_CLIENT_SECRET=00000000-0000-0000-0000-000000000000
export AZURE_SERVICE_URL=https://XXXXXXXXXX.blob.core.windows.net/
export AZURE_CONTAINER_NAME=templates
```

</Accordion>
@@ -406,7 +410,7 @@ Feel free to play with these flags to tune your nuclei scan speed and accuracy.
Many bug bounty platforms/programs require you to identify the HTTP traffic you make; this can be achieved by setting a custom header using the config file at `$HOME/.config/nuclei/config.yaml` or the CLI flag `-H / header`.

<Note>
Setting a custom header using the config file

```yaml
# Headers to include with each request.
@@ -498,7 +502,7 @@ nuclei -l urls.txt -include-tags iot,misc,fuzz
### Scan on internet database
Nuclei supports integration with the [uncover module](https://github.com/projectdiscovery/uncover), which supports services like Shodan, Censys, Hunter, Zoomeye, and many more, to execute Nuclei on these databases.
Here are the uncover options to use:
@@ -584,8 +588,8 @@ For enterprises dealing with large-scale scanning, optimizing Nuclei can be a bu

Users should select a **Scan Strategy** based on the number of targets; each strategy has its own pros & cons (see the config sketch after the list below).

- When targets < 1000, `template-spray` should be used. This strategy is slightly faster than `host-spray` but uses more RAM and does not optimally reuse connections.
- When targets > 1000, `host-spray` should be used. This strategy uses less RAM than `template-spray` and reuses HTTP connections, along with other minor improvements that are crucial when mass scanning.
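
The strategy can also be pinned in the Nuclei config file for repeat runs. The sketch below is only an illustration and assumes the goflags-generated config keys mirror the long flag names (`scan-strategy`, `concurrency`, `bulk-size`); the values are placeholders, not recommendations from this commit.

```yaml
# $HOME/.config/nuclei/config.yaml — hypothetical tuning sketch
scan-strategy: host-spray   # template-spray tends to suit < ~1000 targets
concurrency: 25             # templates executed in parallel
bulk-size: 50               # hosts analyzed in parallel per template
```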

### Concurrency & Bulk-Size

@@ -607,7 +611,7 @@ This option should only be enabled if targets > 10k . This skips any type of sor

## Nuclei **Config**

> Since the release of [v2.3.2](https://blog.projectdiscovery.io/nuclei-v2-3-0-release/), nuclei uses [goflags](https://github.com/projectdiscovery/goflags) for a clean CLI experience and long/short formatted flags.
>
> [goflags](https://github.com/projectdiscovery/goflags) comes with auto-generated config file support that converts all available CLI flags into a config file; you can define all CLI flags in the config file to avoid repeating flags that should load as defaults for every Nuclei scan (an illustrative sketch follows below).
>
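
As an illustration, a config file along these lines could hold the flags you would otherwise repeat on every run. This is a sketch under the assumption that each long flag name maps directly to a YAML key; all values are hypothetical.

```yaml
# $HOME/.config/nuclei/config.yaml — hypothetical per-scan defaults
tags: cve
severity: critical,high
author: geeknik
rate-limit: 150
# Headers to include with each request
header:
  - 'X-BugBounty-Researcher: hunter@example.com'
```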
@@ -810,18 +814,18 @@ Nuclei supports SARIF export of valid findings with `-se, -sarif-export` flag. T
nuclei -l urls.txt -t cves/ -sarif-export report.sarif
```

It is also possible to visualize Nuclei results using **SARIF** files.

1. By uploading a SARIF file to [SARIF Viewer](https://microsoft.github.io/sarif-web-component/)

2. By uploading a SARIF file to [GitHub Actions](https://docs.github.com/en/code-security/code-scanning/integrating-with-code-scanning/uploading-a-sarif-file-to-github) (see the workflow sketch below)

More info on the SARIF output is documented [here](https://github.com/projectdiscovery/nuclei/pull/2925).

<Note>
These are **not official** viewers of Nuclei and `Nuclei` has no liability
towards any of these options to visualize **Nuclei** results. These are just
  some publicly available options to visualize SARIF files.
</Note>

## Scan **Metrics**
@@ -859,3 +863,25 @@ nuclei -passive -target http_data
```

<Note>Passive mode support is limited to templates that use `{{BasedURL}}` or `{{BasedURL/}}` as the base path.</Note>

## Running With Docker
If Nuclei was installed within a Docker container based on the [installation instructions](./install),
the executable does not have the context of the host machine. This means that the executable will not be able to access
local files such as those used for input lists or templates. To resolve this, the container should be run with volumes
mapped to the local filesystem to allow access to these files.

### Basic Usage
This example runs a Nuclei container against `google.com`, prints the results as JSONL, and removes the container once it
has completed:
```sh
docker run --rm projectdiscovery/nuclei -u google.com -jsonl
```

### Using Volumes
This example runs a Nuclei container against a list of URLs, writes the results to a `.jsonl` file and removes the
container once it has completed.
```sh
# This assumes there's a file called `urls.txt` in the current directory
docker run --rm -v ./:/app/ projectdiscovery/nuclei -l /app/urls.txt -jsonl /app/results.jsonl
# The results will be written to `./results.jsonl` on the host machine once the container has completed
```
46 changes: 45 additions & 1 deletion docs/template-guide/network.mdx
@@ -86,6 +86,49 @@ host:

If a port is specified in the host, the user supplied port is ignored and the template port takes precedence.

### Port

Starting from Nuclei v2.9.15, a new field called `port` has been introduced in network templates. This field allows users to specify the port separately instead of including it in the host field.

Previously, if you wanted to write a network template for an exploit targeting SSH, you would have to specify both the hostname and the port in the host field, like this:
```yaml
host:
- "{{Hostname}}"
- "{{Host}}:22"
```

In the above example, two network requests are sent: one to the port specified in the input/target, and another to the default SSH port (22).

The reason behind introducing the port field is to provide users with more flexibility when running network templates on both default and non-default ports. For example, if a user knows that the SSH service is running on a non-default port of 2222 (after performing a port scan with service discovery), they can simply run:

```bash
$ nuclei -u scanme.sh:2222 -id xyz-ssh-exploit
```

In this case, Nuclei will use port 2222 instead of the default port 22. If the user doesn't specify any port in the input, port 22 will be used by default. However, this approach may not be straightforward to understand and can generate warnings in logs since one request is expected to fail.

Another issue with the previous design of writing network templates is that requests can be sent to unexpected ports. For example, if a web service is running on port 8443 and the user runs:

```bash
$ nuclei -u scanme.sh:8443
```

In this case, the `xyz-ssh-exploit` template will send one request to `scanme.sh:22` and another to `scanme.sh:8443`, which may return unexpected responses and eventually result in errors. This is particularly problematic in automation scenarios.

To address these issues while maintaining the existing functionality, network templates can now be written in the following way:

```yaml
host:
- "{{Hostname}}"
port: 22
```
In this new design, the functionality to run templates on non-standard ports will still exist, except for the default reserved ports (`80`, `443`, `8080`, `8443`, `8081`, `53`). Additionally, the list of default reserved ports can be customized by adding a new field called `exclude-ports`:

```yaml
exclude-ports: 80,443
```
When `exclude-ports` is used, the default reserved ports list will be overwritten. This means that if you want to run a network template on port `80`, you will have to explicitly specify it in the `port` field.
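
For illustration, here is a minimal sketch of how `port` and `exclude-ports` sit together inside a template request; the payload, matcher, and port values are hypothetical and only show field placement.

```yaml
# Hypothetical fragment — field placement only
tcp:
  - host:
      - "{{Hostname}}"
    port: 22
    exclude-ports: 8081,53   # overrides the default reserved-port list
    inputs:
      - data: "PING\r\n"
    read-size: 4
    matchers:
      - type: word
        part: data
        words:
          - "PONG"
```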

#### Matchers / Extractor Parts

Valid `part` values supported by **Network** protocol for Matchers / Extractor are -
@@ -105,7 +148,7 @@ id: input-expressions-mongodb-detect
info:
  name: Input Expression MongoDB Detection
  author: pdteam
  severity: info
  reference: https://github.com/orleven/Tentacle
@@ -114,6 +157,7 @@ tcp:
- data: "{{hex_decode('3a000000a741000000000000d40700000000000061646d696e2e24636d640000000000ffffffff130000001069736d6173746572000100000000')}}"
host:
- "{{Hostname}}"
port: 27017
read-size: 2048
matchers:
- type: word
21 changes: 21 additions & 0 deletions integration_tests/network/network-port.yaml
@@ -0,0 +1,21 @@
id: network-port-example

info:
  name: Example Template with Network Port
  author: pdteam
  severity: high
  description: This is an updated description for the network port example.
  reference: https://updated-reference-link

tcp:
  - host:
      - "{{Hostname}}"
    port: 23846
    inputs:
      - data: "PING\r\n"
    read-size: 4
    matchers:
      - type: word
        part: data
        words:
          - "PONG"
10 changes: 10 additions & 0 deletions nuclei-jsonschema.json
@@ -1134,6 +1134,16 @@
"title": "inputs for the network request",
"description": "Inputs contains any input/output for the current request"
},
"port": {
"type": "string",
"title": "port to send requests to",
"description": "Port to send network requests to"
},
"exclude-ports": {
"type": "string",
"title": "exclude ports from being scanned",
"description": "Exclude ports from being scanned"
},
"read-size": {
"type": "integer",
"title": "size of network response to read",