
How to use the grep command in Linux

Search for PATTERN in each FILE or standard input. grep is a command-line utility for searching plain-text data sets for lines that match a regular expression. Its name comes from the ed command g/re/p (globally search for a regular expression and print matching lines), which has the same effect.

Search for any line that contains the word in filename:

Search: grep 'word' filename
Help: grep --help | info grep | man grep
Version information: grep --version | grep -V

grep (GNU grep) 2.20
Copyright (C) 2014 Free Software Foundation, Inc.

Search and filter files:

Case-insensitive search in multiple files: grep -i 'hello world' menu.h main.c
Filter command output, with 4 lines of trailing context: ifconfig | grep ens33 -A 4
eth0: flags=4163
Count the number of matches: ifconfig | grep inet6 -c
Match whole words only: ifconfig | grep "RUNNING" -w
Search in gzipped files: zgrep -i error syslog.gz

The syntax for the grep command is as follows:

Usage: grep [OPTION]... PATTERN [FILE]...

Regexp selection:
  -E, --extended-regexp     PATTERN is an extended regular expression (ERE)
  -F, --fixed-strings       PATTERN is a set of newline-separated fixed strings
  -G, --basic-regexp        PATTERN is a basic regular expression (BRE)
  -P, --perl-regexp         PATTERN is a Perl regular expression
  -e, --regexp=PATTERN      use PATTERN for matching
  -f, --file=FILE           obtain PATTERN from FILE
  -i, --ignore-case         ignore case distinctions
  -w, --word-regexp         force PATTERN to match only whole words
  -x, --line-regexp         force PATTERN to match only whole lines
  -z, --null-data           a data line ends in 0 byte, not newline

Miscellaneous:
  -s, --no-messages         suppress error messages
  -v, --invert-match        select non-matching lines
  -V, --version             display version information and exit
      --help                display this help text and exit

Output control:
  -m, --max-count=NUM       stop after NUM matches
  -b, --byte-offset         print the byte offset with output lines
  -n, --line-number         print line number with output lines
      --line-buffered       flush output on every line
  -H, --with-filename       print the file name for each match
  -h, --no-filename         suppress the file name prefix on output
      --label=LABEL         use LABEL as the standard input file name prefix
  -o, --only-matching       show only the part of a line matching PATTERN
  -q, --quiet, --silent     suppress all normal output
      --binary-files=TYPE   assume that binary files are TYPE; TYPE is 'binary', 'text', or 'without-match'
  -a, --text                equivalent to --binary-files=text
  -I                        equivalent to --binary-files=without-match
  -d, --directories=ACTION  how to handle directories; ACTION is 'read', 'recurse', or 'skip'
  -D, --devices=ACTION      how to handle devices, FIFOs and sockets; ACTION is 'read' or 'skip'
  -r, --recursive           like --directories=recurse
  -R, --dereference-recursive  likewise, but follow all symlinks
      --include=FILE_PATTERN   search only files that match FILE_PATTERN
      --exclude=FILE_PATTERN   skip files and directories matching FILE_PATTERN
      --exclude-from=FILE   skip files matching any file pattern from FILE
      --exclude-dir=PATTERN directories that match PATTERN will be skipped
  -L, --files-without-match print only names of FILEs containing no match
  -l, --files-with-matches  print only names of FILEs containing matches
  -c, --count               print only a count of matching lines per FILE
  -T, --initial-tab         make tabs line up (if needed)
  -Z, --null                print 0 byte after FILE name

Context control:
  -B, --before-context=NUM  print NUM lines of leading context
  -A, --after-context=NUM   print NUM lines of trailing context
  -C, --context=NUM         print NUM lines of output context
  -NUM                      same as --context=NUM
      --group-separator=SEP use SEP as a group separator
      --no-group-separator  use empty string as a group separator
      --color[=WHEN], --colour[=WHEN]  use markers to highlight the matching strings; WHEN is 'always', 'never', or 'auto'
  -U, --binary              do not strip CR characters at EOL (MSDOS/Windows)
  -u, --unix-byte-offsets   report offsets as if CRs were not there (MSDOS/Windows)

'egrep' means 'grep -E'. 'fgrep' means 'grep -F'. Direct invocation as either 'egrep' or 'fgrep' is deprecated.
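The flags above can be combined freely. A minimal sketch, using a throwaway file (the path and contents are made up for illustration), shows -i, -w and -c working together:

```shell
# Create a small sample file (hypothetical path and contents, for illustration)
printf 'alpha\nbeta\nalphabet\nBeta\n' > /tmp/grep_demo.txt

# Whole-word, case-insensitive match: prints "beta" and "Beta",
# but not "alphabet", because -w requires a whole-word match
grep -iw 'beta' /tmp/grep_demo.txt

# -c counts matching lines instead of printing them: "alpha" and
# "alphabet" both contain the substring, so this prints 2
grep -c 'alpha' /tmp/grep_demo.txt
```

Note that without -w, the same pattern would also match inside longer words, which is a common source of false positives when filtering logs.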

How to use the free command in Linux

Display the amount of free and used memory in the system. free displays the total amount of free and used physical and swap memory in the system, as well as the buffers and caches used by the kernel. The information is gathered by parsing /proc/meminfo.

Check memory: free
Help: free --help
Version information: free --version | free -V

free from procps-ng 3.3.10

The free command can be executed without any option; this shows the memory and swap usage of the system.

Show memory usage: free
Human readable: free -h
In megabytes: free --mega
With totals: free -h -t

              total        used        free      shared  buff/cache   available
Mem:        1015024      197720       92424         256      724880      614944
Swap:       4194300      186112     4008188

Display memory usage information continuously (every N seconds; stop with CTRL+C):

free -s 4
free -s 5 -c 10    (repeat every 5 seconds, 10 times, then exit)

The syntax for the free command is as follows:

Usage: free [options]

OPTIONS:
  -b, --bytes         show output in bytes
  -k, --kilo          show output in kilobytes
  -m, --mega          show output in megabytes
  -g, --giga          show output in gigabytes
      --tera          show output in terabytes
      --peta          show output in petabytes
  -h, --human         show human-readable output
      --si            use powers of 1000 not 1024
  -l, --lohi          show detailed low and high memory statistics
  -t, --total         show total for RAM + swap
  -s N, --seconds N   repeat printing every N seconds
  -c N, --count N     repeat printing N times, then exit
  -w, --wide          wide output
      --help          display this help and exit
  -V, --version       output version information and exit
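As a quick sanity check of the columns, the used percentage can be derived from the total and used fields. This sketch reuses the sample numbers from the free -t output above (copied values, not live data):

```shell
# Sample "Mem:" row values (total, used, free, shared, buff/cache, available)
# taken from the example output above -- not measured live
set -- 1015024 197720 92424 256 724880 614944
total=$1; used=$2

# Integer percentage of memory in use: used * 100 / total
echo "used: $(( used * 100 / total ))%"   # prints: used: 19%
```

On a live system the same figures come straight from /proc/meminfo, which is exactly where free reads them.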

How to use the find command in Linux

Search for files and directories based on their permissions, type, date, ownership, size, and more. find is a command-line utility that locates files based on some user-specified criteria and either prints the pathname of each matched object or, if another action is requested, performs that action on each matched object.

Find files by name: find . -name filename
Help: find --help
Version information: find --version

find (GNU findutils) 4.5.11
Copyright (C) 2012 Free Software Foundation, Inc.

Finding files by name is probably the most common use of the find command. To find a file by its name, use the -name option followed by the name of the file you are searching for. Quote glob patterns so the shell does not expand them before find sees them.

By name: find . -name filename.md
Regular files only: find . -type f -name filename.md
Negate with -not: find . -not -name "*.md"
Ignore case: find . -iname filename.md
./fileone
./filetwo

Find files by permissions:

Files with permissions 777: find . -type f -perm 0777 -print
Files without permissions 777: find / -type f ! -perm 777
Read-only files: find / -perm /u=r
Executable files: find / -perm /a=x
Empty files: find /tmp -type f -empty

Find files by time and size:

Modified exactly 50 days ago: find . -mtime 50
Modified between 50 and 100 days ago: find . -mtime +50 -mtime -100
Changed in the last 60 minutes: find / -cmin -60
Between 50 MB and 100 MB: find / -size +50M -size -100M

Find and act on files:

Remove a single file: find . -type f -name "filename.txt" -exec rm -f {} \;
Remove multiple files: find . -type f -name "*.txt" -exec rm -f {} \;
Fix directory permissions: find . -type d -perm 777 -print -exec chmod 755 {} \;
Find specific files and delete them: find . -type f -name "*.mp3" -size +10M -exec rm {} \;

The syntax for the find command is as follows:

Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]

OPTIONS:
  -H   follow symbolic links only while processing the command-line arguments
  -L   follow symbolic links
  -P   never follow symbolic links (explicitly disables symlink following)

Predicates:
  -name pattern   tests whether the file name matches the shell-glob pattern given
  -type type      tests whether the file is a given type. Unix file types accepted include: b - block device (buffered); c - character device (unbuffered); d - directory; f - regular file; l - symbolic link; p - named pipe; s - socket; D - door
  -print          always returns true; prints the name of the current file plus a newline to stdout
  -print0         always returns true; prints the name of the current file plus a null character to stdout. Not required by POSIX
  -exec program [argument ...] ;    always returns true; runs a program with the fixed arguments given and the current path of the file
  -exec program [argument ...] {} + always returns true; runs a program with the fixed arguments given and as many paths as possible (up to the maximum command-line size, like xargs). For most implementations, additional occurrences of {} mean additional copies of the name given (feature not required by POSIX)
  -ok program [argument ...] ;      like -exec, but returns true or false depending on whether the program returns 0

Operators:
  ( expr )                          forces precedence
  ! expr                            true if expr is false
  expr1 expr2 (or expr1 -a expr2)   AND; expr2 is not evaluated if expr1 is false
  expr1 -o expr2                    OR; expr2 is not evaluated if expr1 is true
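A safe way to experiment with these predicates is a throwaway directory tree (the paths below are made up for illustration). Note the quoted globs: unquoted, the shell could expand them before find runs:

```shell
# Build a small hypothetical tree to search in
mkdir -p /tmp/find_demo/sub
touch /tmp/find_demo/a.txt /tmp/find_demo/sub/b.txt /tmp/find_demo/c.log

# Regular files whose names match the (quoted) glob: finds a.txt and b.txt
find /tmp/find_demo -type f -name '*.txt'

# Invert with -not: every regular file that is not a .txt file: finds c.log
find /tmp/find_demo -type f -not -name '*.txt'
```

Once a predicate selects the right set of files, appending an action such as -exec rm {} \; operates on exactly that set, which is why testing with plain -print first is good practice.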

How to use the diff command in Linux

Compare FILES line by line. You can use the diff command to show differences between two files, or each corresponding file in two directories. diff outputs differences between files line by line in any of several formats, selectable by command line options. This set of differences is often called a "diff" or "patch". For files that are identical, diff normally produces no output; for binary (non-text) files, diff normally reports only that they are different.

Compare two files: diff fileone filetwo
Help: diff --help
Version information: diff --version | diff -v

diff (GNU diffutils) 3.3

Sample output of diff fileone filetwo:

1,2c1
< abcd
< dxxxf
---
> hihiih

To view differences in context mode, use the -c option:

diff -c fileone filetwo
*** fileone 2021-07-04 17:49:47.618438022 +0800
--- filetwo 2021-07-04 17:49:31.594436803 +0800
***************
*** 1,2 ****
! abcd
! dxxxf
--- 1 ----
! hihiih

To view differences in unified mode, use the -u option:

diff -u fileone filetwo
--- fileone 2021-07-04 17:49:47.618438022 +0800
+++ filetwo 2021-07-04 17:49:31.594436803 +0800
@@ -1,2 +1 @@
-abcd
-dxxxf
+hihiih

The syntax for the diff command is as follows:

Usage: diff [OPTION]... FILES

OPTIONS:
      --normal                  output a normal diff (the default)
  -q, --brief                   report only when files differ
  -s, --report-identical-files  report when two files are the same
  -c, -C NUM, --context[=NUM]   output NUM (default 3) lines of copied context
  -u, -U NUM, --unified[=NUM]   output NUM (default 3) lines of unified context
  -e, --ed                      output an ed script
  -n, --rcs                     output an RCS format diff
  -y, --side-by-side            output in two columns
  -W, --width=NUM               output at most NUM (default 130) print columns
      --left-column             output only the left column of common lines
      --suppress-common-lines   do not output common lines
  -p, --show-c-function         show which C function each change is in
  -F, --show-function-line=RE   show the most recent line matching RE
      --label LABEL             use LABEL instead of file name (can be repeated)
  -t, --expand-tabs             expand tabs to spaces in output
  -T, --initial-tab             make tabs line up by prepending a tab
      --tabsize=NUM             tab stops every NUM (default 8) print columns
      --suppress-blank-empty    suppress space or tab before empty output lines
  -l, --paginate                pass output through 'pr' to paginate it
  -r, --recursive               recursively compare any subdirectories found
      --no-dereference          don't follow symbolic links
  -N, --new-file                treat absent files as empty
      --unidirectional-new-file treat absent first files as empty
      --ignore-file-name-case   ignore case when comparing file names
      --no-ignore-file-name-case  consider case when comparing file names
  -x, --exclude=PAT             exclude files that match PAT
  -X, --exclude-from=FILE       exclude files that match any pattern in FILE
  -S, --starting-file=FILE      start with FILE when comparing directories
      --from-file=FILE1         compare FILE1 to all operands; FILE1 can be a directory
      --to-file=FILE2           compare all operands to FILE2; FILE2 can be a directory
  -i, --ignore-case             ignore case differences in file contents
  -E, --ignore-tab-expansion    ignore changes due to tab expansion
  -Z, --ignore-trailing-space   ignore white space at line end
  -b, --ignore-space-change     ignore changes in the amount of white space
  -w, --ignore-all-space        ignore all white space
  -B, --ignore-blank-lines      ignore changes where lines are all blank
  -I, --ignore-matching-lines=RE  ignore changes where all lines match RE
  -a, --text                    treat all files as text
      --strip-trailing-cr       strip trailing carriage return on input
  -D, --ifdef=NAME              output merged file with '#ifdef NAME' diffs
      --GTYPE-group-format=GFMT format GTYPE input groups with GFMT
      --line-format=LFMT        format all input lines with LFMT
      --LTYPE-line-format=LFMT  format LTYPE input lines with LFMT
  -d, --minimal                 try hard to find a smaller set of changes
      --horizon-lines=NUM       keep NUM lines of the common prefix and suffix
      --speed-large-files       assume large files and many scattered small changes
      --help                    display this help and exit
  -v, --version                 output version information and exit

FILES are 'FILE1 FILE2' or 'DIR1 DIR2' or 'DIR FILE...' or 'FILE... DIR'. If --from-file or --to-file is given, there are no restrictions on FILE(s). If a FILE is '-', read standard input. Exit status is 0 if inputs are the same, 1 if different, 2 if trouble.
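The exit status convention (0 same, 1 different, 2 trouble) makes diff usable in scripts. This sketch recreates the fileone/filetwo example in throwaway paths and branches on the result:

```shell
# Recreate the two sample files from the example above (hypothetical paths)
printf 'abcd\ndxxxf\n' > /tmp/fileone
printf 'hihiih\n' > /tmp/filetwo

# -q only reports whether the files differ; the exit status drives the branch
if diff -q /tmp/fileone /tmp/filetwo > /dev/null; then
    echo "files are identical"
else
    echo "files differ"
fi
```

Here the branch prints "files differ"; after copying one file over the other, the same test would take the identical branch.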

How to use the df command in Linux

Show information about the file system on which each FILE resides, or all file systems by default. df (abbreviation for disk free) is a standard Unix command used to display the amount of available disk space for file systems on which the invoking user has appropriate read access. df is typically implemented using the statfs or statvfs system calls.

Disk space usage: df
Help: df --help
Version information: df --version

df (GNU coreutils) 8.22
Copyright (C) 2013 Free Software Foundation, Inc.

The df command provides an option to display sizes in human-readable format by using -h (e.g., 1K 2M 3G).

Human readable: df -h
In kilobytes: df -k
In megabytes: df -m
Inodes: df -i
File system type: df -T

Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      ext4       50G   17G   31G  35% /
devtmpfs       devtmpfs  485M     0  485M   0% /dev

The syntax for the df command is as follows:

Usage: df [OPTION]... [FILE]...

OPTIONS:
  -a, --all                 include pseudo, duplicate, inaccessible file systems
  -B, --block-size=SIZE     scale sizes by SIZE before printing them; e.g., '-BM' prints sizes in units of 1,048,576 bytes; see SIZE format below
      --direct              show statistics for a file instead of mount point
      --total               produce a grand total
  -h, --human-readable      print sizes in human readable format (e.g., 1K 234M 2G)
  -H, --si                  likewise, but use powers of 1000 not 1024
  -i, --inodes              list inode information instead of block usage
  -k                        like --block-size=1K
  -l, --local               limit listing to local file systems
      --no-sync             do not invoke sync before getting usage info (default)
      --output[=FIELD_LIST] use the output format defined by FIELD_LIST, or print all fields if FIELD_LIST is omitted
  -P, --portability         use the POSIX output format
      --sync                invoke sync before getting usage info
  -t, --type=TYPE           limit listing to file systems of type TYPE
  -T, --print-type          print file system type
  -x, --exclude-type=TYPE   limit listing to file systems not of type TYPE
  -v                        (ignored)
      --help                display this help and exit
      --version             output version information and exit

Display values are in units of the first available SIZE from --block-size, and the DF_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables. Otherwise, units default to 1024 bytes (or 512 if POSIXLY_CORRECT is set). SIZE is an integer and optional unit (example: 10M is 10*1024*1024). Units are K, M, G, T, P, E, Z, Y (powers of 1024) or KB, MB, ... (powers of 1000). FIELD_LIST is a comma-separated list of columns to be included. Valid field names are: 'source', 'fstype', 'itotal', 'iused', 'iavail', 'ipcent', 'size', 'used', 'avail', 'pcent', 'file' and 'target' (see info page).
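df output is line-oriented, so it is easy to post-process. A minimal sketch (assuming a mounted root filesystem) pulls the Use% column for / with awk, using -P so each entry stays on one line:

```shell
# -P (POSIX output format) guarantees one line per filesystem,
# which keeps the awk column positions stable even for long device names
pct=$(df -P / | awk 'NR==2 {print $5}')

# $5 is the Use% column (e.g. "35%" in the sample output above)
echo "root filesystem usage: $pct"
```

The same pattern with -x or -t first narrows the listing to the filesystems you actually want to monitor.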

How to use the curl command in Linux

Command-line tool and library for transferring data with URLs. cURL (pronounced 'curl') is a computer software project providing a library (libcurl) and command-line tool (curl) for transferring data using various network protocols. The name stands for "Client URL"; the tool was first released in 1997.

How to install curl:

Ubuntu/Debian: apt-get install curl
RHEL / CentOS / Fedora: sudo dnf install curl
OpenSUSE: zypper install curl

Verify the installation: curl --version | curl -V | curl -h

curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.36 zlib/1.2.7 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz unix-sockets

Basic use of cURL involves simply typing curl at the command line, followed by the URL of the output to retrieve.

Get the main page from a web server: curl https://installmd.com/
Fetch over SCP using a private key: curl -u username: --key ~/.ssh/id_rsa scp://example.com/~/file.txt
Download to a file: curl -o index.html https://installmd.com/

curl supports both HTTP and SOCKS proxy servers, with optional authentication.

HTTP proxy: curl -x proxy:8000 ftp://ftp.leachsite.com/README
Byte ranges: curl -r 0-99 https://installmd.com/
Uploading: curl -T uploadfile -u user:passwd ftp://ftp.upload.com/myfile
Verbose / debug: curl -v https://installmd.com/ | curl --trace trace.txt www.haxx.se
POST: curl -d "name=Rafael%20Sagula&phone=3320780" http://www.where.com/guest.cgi

Sample verbose output:

* About to connect() to installmd.com port 443 (#0)
* Trying 172.1.2.3...
* Connected to installmd.com (3.63.3.3) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none

The syntax for the curl command is as follows:

Usage: curl [options...] <url>
<url> OPTIONS: ##### --abstract-unix-socket <path> (HTTP) Connect through an abstract Unix domain socket, instead of using the network. Note: netstat shows the path of an abstract socket prefixed with '@', however the <path> argument should not have this leading character. Added in 7.53.0. ##### --alt-svc <file name> (HTTPS) WARNING: this option is experimental. Do not use in production. This option enables the alt-svc parser in curl. If the file name points to an existing alt-svc cache file, that will be used. After a completed transfer, the cache will be saved to the file name again if it has been modified. Specify a "" file name (zero length) to avoid loading/saving and make curl just handle the cache in memory. If this option is used several times, curl will load contents from all the files but the last one will be used for saving. Added in 7.64.1. ##### --anyauth (HTTP) Tells curl to figure out authentication method by itself, and use the most secure one the remote site claims to support. This is done by first doing a request and checking the response-headers, thus possibly inducing an extra network round-trip. This is used instead of setting a specific authentication method, which you can do with --basic, --digest, --ntlm, and --negotiate. Using --anyauth is not recommended if you do uploads from stdin, since it may require data to be sent twice and then the client must be able to rewind. If the need should arise when uploading from stdin, the upload operation will fail. Used together with -u, --user. See also --proxy-anyauth, --basic and --digest. ##### -a, --append (FTP SFTP) When used in an upload, this makes curl append to the target file instead of overwriting it. If the remote file doesn't exist, it will be created. Note that this flag is ignored by some SFTP servers (including OpenSSH). ##### --aws-sigv4 <provider1[:provider2[:region[:service]]]> Use AWS V4 signature authentication in the transfer. 
The provider argument is a string that is used by the algorithm when creating outgoing authentication headers. The region argument is a string that points to a geographic area of a resources collection (region-code) when the region name is omitted from the endpoint. The service argument is a string that points to a function provided by a cloud (service-code) when the service name is omitted from the endpoint. Added in 7.75.0. ##### --basic (HTTP) Tells curl to use HTTP Basic authentication with the remote host. This is the default and this option is usually pointless, unless you use it to override a previously set option that sets a different authentication method (such as --ntlm, --digest, or --negotiate). Used together with -u, --user. See also --proxy-basic. ##### --cacert <file> (TLS) Tells curl to use the specified certificate file to verify the peer. The file may contain multiple CA certificates. ##### --capath <dir> (TLS) Tells curl to use the specified certificate directory to verify the peer. Multiple paths can be provided by separating them with ":" (e.g. "path1:path2:path3"). The certificates must be in PEM format, and if curl is built against OpenSSL, the directory must have been processed using the c_rehash utility supplied with OpenSSL. Using --capath can allow OpenSSL-powered curl to make SSL-connections much more efficiently than using --cacert if the --cacert file contains many CA certificates. If this option is set, the default capath value will be ignored, and if it is used several times, the last one will be used. ##### --cert-status (TLS) Tells curl to verify the status of the server certificate by using the Certificate Status Request (aka. OCSP stapling) TLS extension. If this option is enabled and the server sends an invalid (e.g. expired) response, if the response suggests that the server certificate has been revoked, or no response at all is received, the verification fails. 
This is currently only implemented in the OpenSSL, GnuTLS and NSS backends. Added in 7.41.0. ##### --cert-type <type> (TLS) Tells curl what type the provided client certificate is using. PEM, DER, ENG and P12 are recognized types. If not specified, PEM is assumed. If this option is used several times, the last one will be used. See also -E, --cert, --key and --key-type. ##### -E, --cert <certificate[:password]> (TLS) Tells curl to use the specified client certificate file when getting a file with HTTPS, FTPS or another SSL-based protocol. ##### --ciphers <list of ciphers> (TLS) Specifies which ciphers to use in the connection. The list of ciphers must specify valid ciphers. Read up on SSL cipher list details on this URL: https://curl.se/docs/ssl-ciphers.html If this option is used several times, the last one will be used. ##### --compressed-ssh (SCP SFTP) Enables built-in SSH compression. This is a request, not an order; the server may or may not do it. Added in 7.56.0. ##### --compressed (HTTP) Request a compressed response using one of the algorithms curl supports, and automatically decompress the content. Headers are not modified. If this option is used and the server sends an unsupported encoding, curl will report an error. ##### -K, --config <file> Specify a text file to read curl arguments from. ##### --connect-timeout <seconds> Maximum time in seconds that you allow curl's connection to take. This only limits the connection phase, so if curl connects within the given period it will continue - if not it will exit. Since version 7.32.0, this option accepts decimal values. If this option is used several times, the last one will be used. See also -m, --max-time. ##### --connect-to <HOST1:PORT1:HOST2:PORT2> For a request to the given HOST1:PORT1 pair, connect to HOST2:PORT2 instead. This option is suitable to direct requests at a specific server, e.g. at a specific cluster node in a cluster of servers. This option is only used to establish the network connection. 
It does NOT affect the hostname/port that is used for TLS/SSL (e.g. SNI, certificate verification) or for the application protocols. "HOST1" and "PORT1" may be the empty string, meaning "any host/port". "HOST2" and "PORT2" may also be the empty string, meaning "use the request's original host/port". A "host" specified to this option is compared as a string, so it needs to match the name used in request URL. It can be either numerical such as "127.0.0.1" or the full host name such as "example.org". This option can be used many times to add many connect rules. See also --resolve and -H, --header. Added in 7.49.0. ##### -C, --continue-at <offset> Continue/Resume a previous file transfer at the given offset. The given offset is the exact number of bytes that will be skipped, counting from the beginning of the source file before it is transferred to the destination. If used with uploads, the FTP server command SIZE will not be used by curl. Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out. If this option is used several times, the last one will be used. See also -r, --range. ##### -c, --cookie-jar <filename> (HTTP) Specify to which file you want curl to write all cookies after a completed operation. Curl writes all cookies from its in-memory cookie storage to the given file at the end of operations. If no cookies are known, no data will be written. The file will be written using the Netscape cookie file format. If you set the file name to a single dash, "-", the cookies will be written to stdout. This command line option will activate the cookie engine that makes curl record and use cookies. Another way to activate it is to use the -b, --cookie option. If the cookie jar can't be created or written to, the whole curl operation won't fail or even report an error clearly. 
Using -v, --verbose will get a warning displayed, but that is the only visible feedback you get about this possibly lethal situation. If this option is used several times, the last specified file name will be used. ##### -b, --cookie <data|filename> (HTTP) Pass the data to the HTTP server in the Cookie header. It is supposedly the data previously received from the server in a "Set-Cookie:" line. The data should be in the format "NAME1=VALUE1; NAME2=VALUE2". If no '=' symbol is used in the argument, it is instead treated as a filename to read previously stored cookie from. This option also activates the cookie engine which will make curl record incoming cookies, which may be handy if you're using this in combination with the -L, --location option or do multiple URL transfers on the same invoke. If the file name is exactly a minus ("-"), curl will instead read the contents from stdin. The file format of the file to read cookies from should be plain HTTP headers (Set-Cookie style) or the Netscape/Mozilla cookie file format. The file specified with -b, --cookie is only used as input. No cookies will be written to the file. To store cookies, use the -c, --cookie-jar option. If you use the Set-Cookie file format and don't specify a domain then the cookie is not sent since the domain will never match. To address this, set a domain in Set-Cookie line (doing that will include sub-domains) or preferably: use the Netscape format. This option can be used multiple times. Users very often want to both read cookies from a file and write updated cookies back to a file, so using both -b, --cookie and -c, --cookie-jar in the same command line is common. ##### --create-dirs When used in conjunction with the -o, --output option, curl will create the necessary local directory hierarchy as needed. This option creates the dirs mentioned with the -o, --output option, nothing else. If the --output file name uses no dir or if the dirs it mentions already exist, no dir will be created. 
Created dirs are made with mode 0750 on unix style file systems. To create remote directories when using FTP or SFTP, try --ftp-create-dirs. ##### --create-file-mode <mode> (SFTP SCP FILE) When curl is used to create files remotely using one of the supported protocols, this option allows the user to set which 'mode' to set on the file at creation time, instead of the default 0644. This option takes an octal number as argument. See also --ftp-create-dirs. Added in 7.75.0. ##### --crlf (FTP SMTP) Convert LF to CRLF in upload. Useful for MVS (OS/390). (SMTP added in 7.40.0) ##### --crlfile <file> (TLS) Provide a file using PEM format with a Certificate Revocation List that may specify peer certificates that are to be considered revoked. If this option is used several times, the last one will be used. Added in 7.19.7. ##### --curves <algorithm list> (TLS) Tells curl to request specific curves to use during SSL session establishment according to RFC 8422, 5.1. Multiple algorithms can be provided by separating them with ":" (e.g. "X25519:P-521"). The parameter is available identically in the "openssl s_client/s_server" utilities. ##### --curves allows a OpenSSL powered curl to make SSL-connections with exactly the (EC) curve requested by the client, avoiding intransparent client/server negotiations. If this option is set, the default curves list built into openssl will be ignored. Added in 7.73.0. ##### --data-ascii <data> (HTTP) This is just an alias for -d, --data. ##### --data-binary <data> (HTTP) This posts data exactly as specified with no extra processing whatsoever. If you start the data with the letter @, the rest should be a filename. Data is posted in a similar manner as -d, --data does, except that newlines and carriage returns are preserved and conversions are never done. Like -d, --data the default content-type sent to the server is application/x-www-form-urlencoded. 
If you want the data to be treated as arbitrary binary data by the server then set the content-type to octet-stream: -H "Content-Type: application/octet-stream". If this option is used several times, the ones following the first will append data as described in -d, --data. ##### --data-raw <data> (HTTP) This posts data similarly to -d, --data but without the special interpretation of the @ character. See also -d, --data. Added in 7.43.0. ##### --data-urlencode <data> (HTTP) This posts data, similar to the other -d, --data options with the exception that this performs URL-encoding. ##### -d, --data <data> (HTTP MQTT) Sends the specified data in a POST request to the HTTP server, in the same way that a browser does when a user has filled in an HTML form and presses the submit button. This will cause curl to pass the data to the server using the content-type application/x-www-form-urlencoded. Compare to -F, --form. ##### --data-raw is almost the same but does not have a special interpretation of the @ character. To post data purely binary, you should instead use the --data-binary option. To URL-encode the value of a form field you may use --data-urlencode. If any of these options is used more than once on the same command line, the data pieces specified will be merged together with a separating &-symbol. Thus, using '-d name=daniel -d skill=lousy' would generate a post chunk that looks like 'name=daniel&skill=lousy'. If you start the data with the letter @, the rest should be a file name to read the data from, or - if you want curl to read the data from stdin. Posting data from a file named 'foobar' would thus be done with -d, --data @foobar. When -d, --data is told to read from a file like that, carriage returns and newlines will be stripped out. If you don't want the @ character to have a special interpretation use --data-raw instead. See also --data-binary, --data-urlencode and --data-raw. This option overrides -F, --form and -I, --head and -T, --upload-file. 
##### --delegation <LEVEL> (GSS/kerberos) Set LEVEL to tell the server what it is allowed to delegate when it comes to user credentials. none Don't allow any delegation. policy Delegates if and only if the OK-AS-DELEGATE flag is set in the Kerberos service ticket, which is a matter of realm policy. always Unconditionally allow the server to delegate. ##### --digest (HTTP) Enables HTTP Digest authentication. This is an authentication scheme that prevents the password from being sent over the wire in clear text. Use this in combination with the normal -u, --user option to set user name and password. If this option is used several times, only the first one is used. See also -u, --user, --proxy-digest and --anyauth. This option overrides --basic and --ntlm and --negotiate. ##### --disable-eprt (FTP) Tell curl to disable the use of the EPRT and LPRT commands when doing active FTP transfers. Curl will normally always first attempt to use EPRT, then LPRT before using PORT, but with this option, it will use PORT right away. EPRT and LPRT are extensions to the original FTP protocol, and may not work on all servers, but they enable more functionality in a better way than the traditional PORT command. --eprt can be used to explicitly enable EPRT again and --no-eprt is an alias for --disable-eprt. If the server is accessed using IPv6, this option will have no effect as EPRT is necessary then. Disabling EPRT only changes the active behavior. If you want to switch to passive mode you need to not use -P, --ftp-port or force it with --ftp-pasv. ##### --disable-epsv (FTP) Tell curl to disable the use of the EPSV command when doing passive FTP transfers. Curl will normally always first attempt to use EPSV before PASV, but with this option, it will not try using EPSV. --epsv can be used to explicitly enable EPSV again and --no-epsv is an alias for --disable-epsv. If the server is an IPv6 host, this option will have no effect as EPSV is necessary then. 
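--digest is normally paired with -u; a minimal sketch against a placeholder host (the URL and credentials are illustrative only):

```shell
# The password is never sent in clear text: curl first receives the
# server's digest challenge (401), then retries with hashed credentials
curl --digest -u alice:secret https://example.com/protected/
```
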
Disabling EPSV only changes the passive behavior. If you want to switch to active mode you need to use -P, --ftp-port. ##### -q, --disable If used as the first parameter on the command line, the curlrc config file will not be read and used. See -K, --config for details on the default config file search path. ##### --disallow-username-in-url (HTTP) This tells curl to exit if passed a URL containing a username. See also --proto. Added in 7.61.0. ##### --dns-interface <interface> (DNS) Tell curl to send outgoing DNS requests through <interface>. This option is a counterpart to --interface (which does not affect DNS). The supplied string must be an interface name (not an address). See also --dns-ipv4-addr and --dns-ipv6-addr. --dns-interface requires that the underlying libcurl was built to support c-ares. Added in 7.33.0. ##### --dns-ipv4-addr <address> (DNS) Tell curl to bind to <ip-address> when making IPv4 DNS requests, so that the DNS requests originate from this address. The argument should be a single IPv4 address. See also --dns-interface and --dns-ipv6-addr. --dns-ipv4-addr requires that the underlying libcurl was built to support c-ares. Added in 7.33.0. ##### --dns-ipv6-addr <address> (DNS) Tell curl to bind to <ip-address> when making IPv6 DNS requests, so that the DNS requests originate from this address. The argument should be a single IPv6 address. See also --dns-interface and --dns-ipv4-addr. --dns-ipv6-addr requires that the underlying libcurl was built to support c-ares. Added in 7.33.0. ##### --dns-servers <addresses> Set the list of DNS servers to be used instead of the system default. The list of IP addresses should be separated with commas. Port numbers may also optionally be given as :<port-number> after each IP address. --dns-servers requires that the underlying libcurl was built to support c-ares. Added in 7.33.0. ##### --doh-cert-status (all) Same as --cert-status but used for DOH (DNS-over-HTTPS). Added in 7.76.0. 
##### --doh-insecure (all) Same as -k, --insecure but used for DOH (DNS-over-HTTPS). Added in 7.76.0. ##### --doh-url <URL> (all) Specifies which DNS-over-HTTPS (DOH) server to use to resolve hostnames, instead of using the default name resolver mechanism. The URL must be HTTPS. Some SSL options that you set for your transfer will apply to DOH since the name lookups take place over SSL. However, the certificate verification settings are not inherited and can be controlled separately via --doh-insecure and --doh-cert-status. If this option is used several times, the last one will be used. Added in 7.62.0. ##### -D, --dump-header <filename> (HTTP FTP) Write the received protocol headers to the specified file. This option is handy to use when you want to store the headers that an HTTP site sends to you. Cookies from the headers could then be read in a second curl invocation by using the -b, --cookie option! The -c, --cookie-jar option is a better way to store cookies. If no headers are received, the use of this option will create an empty file. When used in FTP, the FTP server response lines are considered to be "headers" and thus are saved there. If this option is used several times, the last one will be used. See also -o, --output. ##### --egd-file <file> (TLS) Specify the path name to the Entropy Gathering Daemon socket. The socket is used to seed the random engine for SSL connections. See also --random-file. ##### --engine <name> (TLS) Select the OpenSSL crypto engine to use for cipher operations. Use --engine list to print a list of build-time supported engines. Note that not all (or none) of the engines may be available at run-time. ##### --etag-compare <file> (HTTP) This option makes a conditional HTTP request for the specific ETag read from the given file by sending a custom If-None-Match header using the extracted ETag. For correct results, make sure that the specified file contains only a single line with the desired ETag. An empty file is parsed as an empty ETag. 
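A short -D sketch, with placeholder URL and file names:

```shell
# Save response headers to one file and the body to another
curl -D headers.txt -o body.html https://example.com/

# Headers saved this way can feed a later invocation as a cookie source,
# though -c (a cookie jar) is the preferred mechanism for cookies
curl -b headers.txt https://example.com/next-page
```
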
Use the option --etag-save to first save the ETag from a response, and then use this option to compare against the saved ETag in a subsequent request. COMPARISON: There are 2 types of comparison of ETags: Weak and Strong. This option expects and uses a strong comparison. Added in 7.68.0. ##### --etag-save <file> (HTTP) This option saves an HTTP ETag to the specified file. An ETag is usually part of the headers returned by a request. When the server sends an ETag, it is enclosed in double quotes. This option extracts the ETag without the double quotes and saves it into the <file>. A server can send a weak ETag which is prefixed by "W/". This identifier is not considered, and only the relevant ETag between quotation marks is parsed. If an ETag wasn't sent by the server or it cannot be parsed, an empty file is created. Added in 7.68.0. ##### --expect100-timeout <seconds> (HTTP) Maximum time in seconds that you allow curl to wait for a 100-continue response when curl emits an Expect: 100-continue header in its request. By default curl will wait one second. This option accepts decimal values! When curl stops waiting, it will continue as if the response had been received. See also --connect-timeout. Added in 7.47.0. ##### --fail-early Fail and exit on the first detected transfer error. When curl is used to do multiple transfers on the command line, it will attempt to operate on each given URL, one by one. By default, it will ignore errors if there are more URLs given and the last URL's success will determine the error code curl returns. So early failures will be "hidden" by subsequent successful transfers. Using this option, curl will instead return an error on the first transfer that fails, independent of the number of URLs that are given on the command line. This way, no transfer failures go undetected by scripts and similar. This option is global and does not need to be specified for each use of -:, --next. 
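The save-then-compare workflow described above can be sketched as follows; the resource URL and file names are placeholders:

```shell
# First request: store the response ETag alongside the body
curl --etag-save etag.txt -o resource.json https://example.com/resource

# Later request: only transfer the body if the ETag changed
# (the server answers 304 Not Modified otherwise); saving again
# keeps the stored ETag current
curl --etag-compare etag.txt --etag-save etag.txt \
     -o resource.json https://example.com/resource
```
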
This option does not imply -f, --fail, which causes transfers to fail due to the server's HTTP status code. You can combine the two options, however note -f, --fail is not global and is therefore contained by -:, --next. Added in 7.52.0. ##### --fail-with-body (HTTP) Return an error on server errors where the HTTP response code is 400 or greater. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will still allow curl to output and save that content but also to return error 22. This is an alternative option to -f, --fail which makes curl fail for the same circumstances but without saving the content. See also -f, --fail. Added in 7.76.0. ##### -f, --fail (HTTP) Fail silently (no output at all) on server errors. This is mostly done to enable scripts etc. to better deal with failed attempts. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will prevent curl from outputting that and return error 22. This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407). See also --fail-with-body. ##### --false-start (TLS) Tells curl to use false start during the TLS handshake. False start is a mode where a TLS client will start sending application data before verifying the server's Finished message, thus saving a round trip when performing a full handshake. This is currently only implemented in the NSS and Secure Transport (on iOS 7.0 or later, or OS X 10.9 or later) backends. Added in 7.42.0. ##### --form-string <name=string> (HTTP SMTP IMAP) Similar to -F, --form except that the value string for the named parameter is used literally. Leading '@' and '<' characters, and the ';type=' string in the value have no special meaning. 
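A sketch contrasting the two failure modes (placeholder URLs):

```shell
# -f: exit with code 22 on HTTP >= 400 and discard the error body
curl -f -o out.html https://example.com/missing

# --fail-with-body: same exit code, but the server's error page is saved,
# which helps when the body explains why the request failed
curl --fail-with-body -o error.html https://example.com/missing
```
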
Use this in preference to -F, --form if there's any possibility that the string value may accidentally trigger the '@' or '<' features of -F, --form. See also -F, --form. ##### -F, --form <name=content> (HTTP SMTP IMAP) For the HTTP protocol family, this lets curl emulate a filled-in form in which a user has pressed the submit button. ##### --ftp-account <data> (FTP) When an FTP server asks for "account data" after user name and password have been provided, this data is sent off using the ACCT command. If this option is used several times, the last one will be used. Added in 7.13.0. ##### --ftp-alternative-to-user <command> (FTP) If authenticating with the USER and PASS commands fails, send this command. When connecting to Tumbleweed's Secure Transport server over FTPS using a client certificate, using "SITE AUTH" will tell the server to retrieve the username from the certificate. Added in 7.15.5. ##### --ftp-create-dirs (FTP SFTP) When an FTP or SFTP URL/operation uses a path that doesn't currently exist on the server, the standard behavior of curl is to fail. Using this option, curl will instead attempt to create missing directories. See also --create-dirs. ##### --ftp-method <method> (FTP) Control what method curl should use to reach a file on an FTP(S) server. The method argument should be one of the following alternatives: multicwd curl does a single CWD operation for each path part in the given URL. For deep hierarchies this means very many commands. This is how RFC 1738 says it should be done. This is the default but the slowest behavior. nocwd curl does no CWD at all. curl will do SIZE, RETR, STOR etc. and give a full path to the server for all these commands. This is the fastest behavior. singlecwd curl does one CWD with the full target directory and then operates on the file "normally" (like in the multicwd case). This is somewhat more standards compliant than 'nocwd' but without the full penalty of 'multicwd'. Added in 7.15.1. 
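A brief -F sketch; the field names, file name, and host are placeholders:

```shell
# Emulate an HTML form post: a text field plus a file upload
curl -F name=Alice -F portrait=@photo.jpg https://example.com/profile

# --form-string when the value could start with '@' or '<' and must
# be taken literally rather than read from a file
curl --form-string "comment=@literal-at-sign" https://example.com/profile
```
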
##### --ftp-pasv (FTP) Use passive mode for the data connection. Passive is the internal default behavior, but this option can be used to override a previous -P, --ftp-port option. If this option is used several times, only the first one is used. An enforced passive mode cannot really be undone; you must instead enforce the desired -P, --ftp-port again. Passive mode means that curl will try the EPSV command first and then PASV, unless --disable-epsv is used. See also --disable-epsv. Added in 7.11.0. ##### -P, --ftp-port <address> (FTP) Reverses the default initiator/listener roles when connecting with FTP. This option makes curl use active mode. curl then tells the server to connect back to the client's specified address and port, while passive mode asks the server to set up an IP address and port for it to connect to. <address> should be one of: interface e.g. "eth0" to specify which interface's IP address you want to use (Unix only) IP address e.g. "192.168.10.1" to specify the exact IP address host name e.g. "my.host.domain" to specify the machine - make curl pick the same IP address that is already used for the control connection If this option is used several times, the last one will be used. Disable the use of PORT with --ftp-pasv. Disable the attempt to use the EPRT command instead of PORT by using --disable-eprt. EPRT is really PORT++. Since 7.19.5, you can append ":[start]-[end]" to the right of the address, to tell curl what TCP port range to use. That means you specify a port range, from a lower to a higher number. A single number works as well, but do note that it increases the risk of failure since the port may not be available. See also --ftp-pasv and --disable-eprt. ##### --ftp-pret (FTP) Tell curl to send a PRET command before PASV (and EPSV). Certain FTP servers, mainly drftpd, require this non-standard command for directory listings as well as up and downloads in PASV mode. Added in 7.20.0. 
##### --ftp-skip-pasv-ip (FTP) Tell curl to not use the IP address the server suggests in its response to curl's PASV command when establishing the data connection. Instead curl will re-use the same IP address it already uses for the control connection. Since curl 7.74.0 this option is enabled by default. This option has no effect if PORT, EPRT or EPSV is used instead of PASV. See also --ftp-pasv. Added in 7.14.2. ##### --ftp-ssl-ccc-mode <active/passive> (FTP) Sets the CCC mode. The passive mode will not initiate the shutdown, but instead wait for the server to do it, and will not reply to the shutdown from the server. The active mode initiates the shutdown and waits for a reply from the server. See also --ftp-ssl-ccc. Added in 7.16.2. ##### --ftp-ssl-ccc (FTP) Use CCC (Clear Command Channel). This shuts down the SSL/TLS layer after authenticating. The rest of the control channel communication will be unencrypted. This allows NAT routers to follow the FTP transaction. The default mode is passive. See also --ssl and --ftp-ssl-ccc-mode. Added in 7.16.1. ##### --ftp-ssl-control (FTP) Require SSL/TLS for the FTP login, clear for transfer. Allows secure authentication, but non-encrypted data transfers for efficiency. Fails the transfer if the server doesn't support SSL/TLS. Added in 7.16.0. ##### -G, --get When used, this option will make all data specified with -d, --data, --data-binary or --data-urlencode be used in an HTTP GET request instead of the POST request that otherwise would be used. The data will be appended to the URL with a '?' separator. If used in combination with -I, --head, the POST data will instead be appended to the URL with a HEAD request. If this option is used several times, only the first one is used. This is because undoing a GET doesn't make sense, but you should then instead enforce the alternative method you prefer. ##### -g, --globoff This option switches off the "URL globbing parser". 
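A -G sketch showing how data pieces become a query string; the host and parameter names are placeholders:

```shell
# -G turns -d/--data-urlencode pieces into a query string: the pieces
# are joined with '&' and appended after '?', so this requests
# roughly /search?q=hello%20world&page=2 as a GET
curl -G --data-urlencode "q=hello world" -d page=2 https://example.com/search
```
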
When you set this option, you can specify URLs that contain the letters {}[] without having them interpreted by curl itself. Note that these letters are not normal legal URL contents but they should be encoded according to the URI standard. ##### --happy-eyeballs-timeout-ms <milliseconds> Happy eyeballs is an algorithm that attempts to connect to both IPv4 and IPv6 addresses for dual-stack hosts, preferring IPv6 first for the given number of milliseconds. If the IPv6 address cannot be connected to within that time then a connection attempt is made to the IPv4 address in parallel. The first connection to be established is the one that is used. The range of suggested useful values is limited. Happy Eyeballs RFC 6555 says "It is RECOMMENDED that connection attempts be paced 150-250 ms apart to balance human factors against network load." libcurl currently defaults to 200 ms. Firefox and Chrome currently default to 300 ms. If this option is used several times, the last one will be used. Added in 7.59.0. ##### --haproxy-protocol (HTTP) Send a HAProxy PROXY protocol v1 header at the beginning of the connection. This is used by some load balancers and reverse proxies to indicate the client's true IP address and port. This option is primarily useful when sending test requests to a service that expects this header. Added in 7.60.0. ##### -I, --head (HTTP FTP FILE) Fetch the headers only! HTTP servers feature the command HEAD which this uses to get nothing but the header of a document. When used on an FTP or FILE file, curl displays the file size and last modification time only. ##### -H, --header <header/@file> (HTTP) Extra header to include in the request when sending HTTP to a server. ##### -h, --help <category> Usage help. This lists all commands of the <category>. If no arg was provided, curl will display the most important command line arguments. If the argument "all" was provided, curl will display all options available. 
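A short sketch of -I and -H together; the header names and host are placeholders:

```shell
# Headers only: sends a HEAD request and prints the response headers
curl -I https://example.com/

# Add custom request headers; repeat -H once per header
curl -H "X-Trace-Id: abc123" -H "Accept: application/json" \
     https://example.com/api
```
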
If the argument "category" was provided, curl will display all categories and their meanings. ##### --hostpubmd5 <md5> (SFTP SCP) Pass a string containing 32 hexadecimal digits. The string should be the 128 bit MD5 checksum of the remote host's public key; curl will refuse the connection with the host unless the md5sums match. Added in 7.17.1. ##### --hsts <file name> (HTTPS) WARNING: this option is experimental. Do not use in production. This option enables HSTS for the transfer. If the file name points to an existing HSTS cache file, that will be used. After a completed transfer, the cache will be saved to the file name again if it has been modified. Specify a "" file name (zero length) to avoid loading/saving and make curl just handle HSTS in memory. If this option is used several times, curl will load contents from all the files but the last one will be used for saving. Added in 7.74.0. ##### --http0.9 (HTTP) Tells curl to be fine with an HTTP version 0.9 response. HTTP/0.9 is a completely headerless response and therefore you can also connect with this to non-HTTP servers and still get a response since curl will simply transparently downgrade - if allowed. Since curl 7.66.0, HTTP/0.9 is disabled by default. ##### -0, --http1.0 (HTTP) Tells curl to use HTTP version 1.0 instead of using its internally preferred HTTP version. This option overrides --http1.1 and --http2. ##### --http1.1 (HTTP) Tells curl to use HTTP version 1.1. This option overrides -0, --http1.0 and --http2. Added in 7.33.0. ##### --http2-prior-knowledge (HTTP) Tells curl to issue its non-TLS HTTP requests using HTTP/2 without HTTP/1.1 Upgrade. It requires prior knowledge that the server supports HTTP/2 straight away. HTTPS requests will still do HTTP/2 the standard way with negotiated protocol version in the TLS handshake. --http2-prior-knowledge requires that the underlying libcurl was built to support HTTP/2. This option overrides --http1.1 and -0, --http1.0 and --http2. Added in 7.49.0. 
##### --http2 (HTTP) Tells curl to use HTTP version 2. See also --http1.1 and --http3. --http2 requires that the underlying libcurl was built to support HTTP/2. This option overrides --http1.1 and -0, --http1.0 and --http2-prior-knowledge. Added in 7.33.0. ##### --http3 (HTTP) WARNING: this option is experimental. Do not use in production. Tells curl to use HTTP version 3 directly to the host and port number used in the URL. A normal HTTP/3 transaction will be done to a host and then get redirected via Alt-Svc, but this option allows a user to circumvent that when you know that the target speaks HTTP/3 on the given host and port. This option will make curl fail if a QUIC connection cannot be established; it cannot fall back to a lower HTTP version on its own. See also --http1.1 and --http2. --http3 requires that the underlying libcurl was built to support HTTP/3. This option overrides --http1.1 and -0, --http1.0 and --http2 and --http2-prior-knowledge. Added in 7.66.0. ##### --ignore-content-length (FTP HTTP) For HTTP, ignore the Content-Length header. This is particularly useful for servers running Apache 1.x, which will report incorrect Content-Length for files larger than 2 gigabytes. For FTP (since 7.46.0), skip the RETR command to figure out the size before downloading a file. This option doesn't work if libcurl was built to use hyper for HTTP. ##### -i, --include Include the HTTP response headers in the output. The HTTP response headers can include things like server name, cookies, date of the document, HTTP version and more... To view the request headers, consider the -v, --verbose option. See also -v, --verbose. ##### -k, --insecure (TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl to proceed and operate even for server connections otherwise considered insecure. The server connection is verified by making sure the server's certificate contains the right name and verifies successfully using the cert store. 
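A sketch of the HTTP version selectors, using a placeholder host; all three require a curl/libcurl built with the corresponding support:

```shell
# Force HTTP/1.1
curl --http1.1 https://example.com/

# Negotiate HTTP/2 (falls back via ALPN/Upgrade as appropriate)
curl --http2 https://example.com/

# Speak cleartext HTTP/2 immediately, skipping the HTTP/1.1 Upgrade;
# only safe when you already know the server supports it
curl --http2-prior-knowledge http://example.com/
```
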
See this online resource for further details: https://curl.se/docs/sslcerts.html See also --proxy-insecure and --cacert. ##### --interface <name> Perform an operation using a specified interface. You can enter an interface name, IP address or host name. An example could look like: curl --interface eth0:1 https://www.example.com/ If this option is used several times, the last one will be used. On Linux it can be used to specify a VRF, but the binary needs to either have CAP_NET_RAW or to be run as root. More information about Linux VRF: https://www.kernel.org/doc/Documentation/networking/vrf.txt See also --dns-interface. ##### -4, --ipv4 This option tells curl to resolve names to IPv4 addresses only, and not for example try IPv6. See also --http1.1 and --http2. This option overrides -6, --ipv6. ##### -6, --ipv6 This option tells curl to resolve names to IPv6 addresses only, and not for example try IPv4. See also --http1.1 and --http2. This option overrides -4, --ipv4. ##### -j, --junk-session-cookies (HTTP) When curl is told to read cookies from a given file, this option will make it discard all "session cookies". This will basically have the same effect as if a new session is started. Typical browsers always discard session cookies when they're closed down. See also -b, --cookie and -c, --cookie-jar. ##### --keepalive-time <seconds> This option sets the time a connection needs to remain idle before sending keepalive probes and the time between individual keepalive probes. It is currently effective on operating systems offering the TCP_KEEPIDLE and TCP_KEEPINTVL socket options (meaning Linux, recent AIX, HP-UX and more). This option has no effect if --no-keepalive is used. If this option is used several times, the last one will be used. If unspecified, the option defaults to 60 seconds. Added in 7.18.0. ##### --key-type <type> (TLS) Private key file type. Specify which type your --key provided private key is. DER, PEM, and ENG are supported. 
If not specified, PEM is assumed. If this option is used several times, the last one will be used. ##### --key <key> (TLS SSH) Private key file name. Allows you to provide your private key in this separate file. For SSH, if not specified, curl tries the following candidates in order: '~/.ssh/id_rsa', '~/.ssh/id_dsa', './id_rsa', './id_dsa'. If curl is built against the OpenSSL library, and the engine pkcs11 is available, then a PKCS#11 URI (RFC 7512) can be used to specify a private key located in a PKCS#11 device. A string beginning with "pkcs11:" will be interpreted as a PKCS#11 URI. If a PKCS#11 URI is provided, then the --engine option will be set as "pkcs11" if none was provided and the --key-type option will be set as "ENG" if none was provided. If this option is used several times, the last one will be used. ##### --krb <level> (FTP) Enable Kerberos authentication and use. The level must be entered and should be one of 'clear', 'safe', 'confidential', or 'private'. Should you use a level that is not one of these, 'private' will instead be used. If this option is used several times, the last one will be used. --krb requires that the underlying libcurl was built to support Kerberos. ##### --libcurl <file> Append this option to any ordinary curl command line, and you will get libcurl-using C source code written to the file that does the equivalent of what your command-line operation does! If this option is used several times, the last given file name will be used. Added in 7.16.1. ##### --limit-rate <speed> Specify the maximum transfer rate you want curl to use - for both downloads and uploads. This feature is useful if you have a limited pipe and you'd like your transfer not to use your entire bandwidth, making it slower than it otherwise would be. The given speed is measured in bytes/second, unless a suffix is appended. Appending 'k' or 'K' will count the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. 
Examples: 200K, 3m and 1G. If you also use the -Y, --speed-limit option, that option will take precedence and might cripple the rate-limiting slightly, to help keep the speed-limit logic working. If this option is used several times, the last one will be used. ##### -l, --list-only (FTP POP3) (FTP) When listing an FTP directory, this switch forces a name-only view. This is especially useful if the user wants to machine-parse the contents of an FTP directory since the normal directory view doesn't use a standard look or format. When used like this, the option causes an NLST command to be sent to the server instead of LIST. Note: Some FTP servers list only files in their response to NLST; they do not include sub-directories and symbolic links. (POP3) When retrieving a specific email from POP3, this switch forces a LIST command to be performed instead of RETR. This is particularly useful if the user wants to see if a specific message id exists on the server and what size it is. Note: When combined with -X, --request, this option can be used to send a UIDL command instead, so the user may use the email's unique identifier rather than its message id to make the request. Added in 4.0. ##### --local-port <num/range> Set a preferred single number or range (FROM-TO) of local port numbers to use for the connection(s). Note that port numbers by nature are a scarce resource that will be busy at times so setting this range to something too narrow might cause unnecessary connection setup failures. Added in 7.15.2. ##### --location-trusted (HTTP) Like -L, --location, but will allow sending the name + password to all hosts that the site may redirect to. This may or may not introduce a security breach if the site redirects you to a site to which you'll send your authentication info (which is plaintext in the case of HTTP Basic authentication). See also -u, --user. 
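Two short sketches for --limit-rate and --list-only; the hosts and paths are placeholders:

```shell
# Cap the transfer at 100 kilobytes/second, in either direction
curl --limit-rate 100K -O https://example.com/big-file.iso

# Name-only FTP directory listing (NLST instead of LIST)
curl --list-only ftp://example.com/pub/
```
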
##### -L, --location (HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place. ##### --login-options <options> (IMAP POP3 SMTP) Specify the login options to use during server authentication. You can use the login options to specify protocol specific options that may be used during authentication. At present only IMAP, POP3 and SMTP support login options. For more information about the login options please see RFC 2384, RFC 5092 and IETF draft draft-earhart-url-smtp-00.txt If this option is used several times, the last one will be used. Added in 7.34.0. ##### --mail-auth <address> (SMTP) Specify a single address. This will be used to specify the authentication address (identity) of a submitted message that is being relayed to another server. See also --mail-rcpt and --mail-from. Added in 7.25.0. ##### --mail-from <address> (SMTP) Specify a single address that the given mail should get sent from. See also --mail-rcpt and --mail-auth. Added in 7.20.0. ##### --mail-rcpt-allowfails (SMTP) When sending data to multiple recipients, by default curl will abort the SMTP conversation if at least one of the recipients causes the RCPT TO command to return an error. The default behavior can be changed by passing the --mail-rcpt-allowfails command-line option which will make curl ignore errors and proceed with the remaining valid recipients. If all recipients cause the RCPT TO command to fail, curl will abort the SMTP conversation and return the error received from the last RCPT TO command. Added in 7.69.0. ##### --mail-rcpt <address> (SMTP) Specify a single address, user name or mailing list name. Repeat this option several times to send to multiple recipients. When performing a mail transfer, the recipient should specify a valid email address to send the mail to. 
When performing an address verification (VRFY command), the recipient should be specified as the user name or user name and domain (as per Section 3.5 of RFC 5321). (Added in 7.34.0) When performing a mailing list expand (EXPN command), the recipient should be specified using the mailing list name, such as "Friends" or "London-Office". (Added in 7.34.0) Added in 7.20.0. ##### -M, --manual Manual. Display the huge help text. ##### --max-filesize <bytes> Specify the maximum size (in bytes) of a file to download. If the file requested is larger than this value, the transfer will not start and curl will return with exit code 63. A size modifier may be used. For example, appending 'k' or 'K' will count the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. Examples: 200K, 3m and 1G. (Added in 7.58.0) NOTE: The file size is not always known prior to download, and for such files this option has no effect even if the file transfer ends up being larger than this given limit. This concerns both FTP and HTTP transfers. See also --limit-rate. ##### --max-redirs <num> (HTTP) Set the maximum number of redirections to follow. When -L, --location is used, this option can be used to prevent curl from following too many redirects. By default, the limit is set to 50 redirections. Set this option to -1 to make it unlimited. If this option is used several times, the last one will be used. ##### -m, --max-time <seconds> Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. Since 7.32.0, this option accepts decimal values, but the actual timeout will decrease in accuracy as the specified timeout increases in decimal precision. If this option is used several times, the last one will be used. See also --connect-timeout. ##### --metalink This option was previously used to specify a metalink resource. 
Metalink support has unfortunately been disabled in curl since 7.78.0 due to security reasons. Added in 7.27.0. ##### --negotiate (HTTP) Enables Negotiate (SPNEGO) authentication. This option requires a library built with GSS-API or SSPI support. Use -V, --version to see if your curl supports GSS-API/SSPI or SPNEGO. When using this option, you must also provide a fake -u, --user option to activate the authentication code properly. Sending a '-u :' is enough as the user name and password from the -u, --user option aren't actually used. If this option is used several times, only the first one is used. See also --basic, --ntlm, --anyauth and --proxy-negotiate. ##### --netrc-file <filename> This option is similar to -n, --netrc, except that you provide the path (absolute or relative) to the netrc file that curl should use. You can only specify one netrc file per invocation. If several --netrc-file options are provided, the last one will be used. It will abide by --netrc-optional if specified. This option overrides -n, --netrc. Added in 7.21.5. ##### --netrc-optional Very similar to -n, --netrc, but this option makes the .netrc usage optional and not mandatory as the -n, --netrc option does. See also --netrc-file. This option overrides -n, --netrc. ##### -n, --netrc Makes curl scan the .netrc (_netrc on Windows) file in the user's home directory for login name and password. This is typically used for FTP on Unix. If used with HTTP, curl will enable user authentication. See netrc(5) ftp(1) for details on the file format. Curl will not complain if that file doesn't have the right permissions (it should not be either world- or group-readable). The environment variable "HOME" is used to find the home directory. 
A quick and very simple example of how to set up a .netrc to allow curl to FTP to the machine host.domain.com with user name 'myself' and password 'secret' should look similar to: machine host.domain.com login myself password secret ##### -:, --next Tells curl to use a separate operation for the following URL and associated options. This allows you to send several URL requests, each with its own specific options, such as different user names or custom requests for each. -:, --next will reset all local options and only global ones will have their values survive over to the operation following the -:, --next instruction. Global options include -v, --verbose, --trace, --trace-ascii and --fail-early. For example, you can do both a GET and a POST in a single command line: curl www1.example.com --next -d postthis www2.example.com Added in 7.36.0. ##### --no-alpn (HTTPS) Disable the ALPN TLS extension. ALPN is enabled by default if libcurl was built with an SSL library that supports ALPN. ALPN is used by a libcurl that supports HTTP/2 to negotiate HTTP/2 support with the server during https sessions. See also --no-npn and --http2. --no-alpn requires that the underlying libcurl was built to support TLS. Added in 7.36.0. ##### -N, --no-buffer Disables the buffering of the output stream. In normal work situations, curl will use a standard buffered output stream that will have the effect that it will output the data in chunks, not necessarily exactly when the data arrives. Using this option will disable that buffering. Note that this is the negated option name documented. You can thus use --buffer to enforce the buffering. ##### --no-keepalive Disables the use of keepalive messages on the TCP connection. curl otherwise enables them by default. Note that this is the negated option name documented. You can thus use --keepalive to enforce keepalive. ##### --no-npn (HTTPS) Disable the NPN TLS extension.
NPN is enabled by default if libcurl was built with an SSL library that supports NPN. NPN is used by a libcurl that supports HTTP/2 to negotiate HTTP/2 support with the server during https sessions. See also --no-alpn and --http2. --no-npn requires that the underlying libcurl was built to support TLS. Added in 7.36.0. ##### --no-progress-meter Option to switch off the progress meter output without muting or otherwise affecting warning and informational messages like -s, --silent does. Note that this is the negated option name documented. You can thus use --progress-meter to enable the progress meter again. See also -v, --verbose and -s, --silent. Added in 7.67.0. ##### --no-sessionid (TLS) Disable curl's use of SSL session-ID caching. By default all transfers are done using the cache. Note that while nothing should ever get hurt by attempting to reuse SSL session-IDs, there seem to be broken SSL implementations in the wild that may require you to disable this in order for you to succeed. Note that this is the negated option name documented. You can thus use --sessionid to enforce session-ID caching. Added in 7.16.0. ##### --noproxy <no-proxy-list> Comma-separated list of hosts which do not use a proxy, if one is specified. The only wildcard is a single * character, which matches all hosts, and effectively disables the proxy. Each name in this list is matched as either a domain which contains the hostname, or the hostname itself. For example, local.com would match local.com, local.com:80, and www.local.com, but not www.notlocal.com. Since 7.53.0, this option overrides the environment variables that disable the proxy. If there's an environment variable disabling a proxy, you can set the noproxy list to "" to override it. Added in 7.19.4. ##### --ntlm-wb (HTTP) Enables NTLM much in the style --ntlm does, but hands over the authentication to the separate binary ntlmauth application that is executed when needed. See also --ntlm and --proxy-ntlm.
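The host-matching rule that --noproxy applies can be sketched in plain shell. This is an illustrative approximation only, not curl's code: ports and IP addresses are ignored here, and `matches_noproxy` is a made-up helper name.

```shell
# Rough sketch of the --noproxy matching rule described above: a list
# entry matches the host itself or any subdomain of it, and a single
# "*" matches every host (effectively disabling the proxy).
matches_noproxy() {
  host=$1 list=$2
  for entry in $(printf '%s' "$list" | tr ',' ' '); do
    if [ "$entry" = '*' ]; then return 0; fi     # wildcard: always direct
    case $host in
      "$entry" | *".$entry") return 0 ;;         # exact host or subdomain
    esac
  done
  return 1
}

matches_noproxy www.local.com 'local.com'    && echo 'www.local.com: direct'
matches_noproxy www.notlocal.com 'local.com' || echo 'www.notlocal.com: via proxy'
```

Note that a substring hit is not enough: www.notlocal.com does not match the entry local.com because the entry must align on a label boundary.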
##### --ntlm (HTTP) Enables NTLM authentication. The NTLM authentication method was designed by Microsoft and is used by IIS web servers. It is a proprietary protocol, reverse-engineered by clever people and implemented in curl based on their efforts. This kind of behavior should not be endorsed; you should encourage everyone who uses NTLM to switch to a public and documented authentication method instead, such as Digest. If you want to enable NTLM for your proxy authentication, then use --proxy-ntlm. If this option is used several times, only the first one is used. See also --proxy-ntlm. --ntlm requires that the underlying libcurl was built to support TLS. This option overrides --basic and --negotiate and --digest and --anyauth. ##### --oauth2-bearer <token> (IMAP POP3 SMTP HTTP) Specify the Bearer Token for OAUTH 2.0 server authentication. The Bearer Token is used in conjunction with the user name which can be specified as part of the --url or -u, --user options. The Bearer Token and user name are formatted according to RFC 6750. If this option is used several times, the last one will be used. ##### --output-dir <dir> This option specifies the directory in which files should be stored, when -O, --remote-name or -o, --output are used. The given output directory is used for all URLs and output options on the command line, up until the first -:, --next. If the specified target directory doesn't exist, the operation will fail unless --create-dirs is also used. If this option is used multiple times, the last specified directory will be used. See also -O, --remote-name and -J, --remote-header-name. Added in 7.73.0. ##### -o, --output <file> Write output to <file> instead of stdout. If you are using {} or [] to fetch multiple documents, you should quote the URL and you can use '#' followed by a number in the <file> specifier. That variable will be replaced with the current string for the URL being fetched.
Like in: curl "http://{one,two}.example.com" -o "file_#1.txt" or use several variables like: curl "http://{site,host}.host[1-5].com" -o "#1_#2" You may use this option as many times as the number of URLs you have. For example, if you specify two URLs on the same command line, you can use it like this: curl -o aa example.com -o bb example.net and the order of the -o options and the URLs doesn't matter, just that the first -o is for the first URL and so on, so the above command line can also be written as curl example.com example.net -o aa -o bb See also the --create-dirs option to create the local directories dynamically. Specifying the output as '-' (a single dash) will force the output to be done to stdout. See also -O, --remote-name, --remote-name-all and -J, --remote-header-name. ##### --parallel-immediate When doing parallel transfers, this option will instruct curl to prefer opening up more connections in parallel at once rather than waiting to see if new transfers can be added as multiplexed streams on another connection. See also -Z, --parallel and --parallel-max. Added in 7.68.0. ##### --parallel-max <num> When asked to do parallel transfers, using -Z, --parallel, this option controls the maximum amount of transfers to do simultaneously. The default is 50. See also -Z, --parallel. Added in 7.66.0. ##### -Z, --parallel Makes curl perform its transfers in parallel as compared to the regular serial manner. Added in 7.66.0. ##### --pass <phrase> (SSH TLS) Passphrase for the private key. If this option is used several times, the last one will be used. ##### --path-as-is Tell curl to not handle sequences of /../ or /./ in the given URL path. Normally curl will squash or merge them according to standards but with this option set you tell it not to do that. Added in 7.42.0. ##### --pinnedpubkey <hashes> (TLS) Tells curl to use the specified public key file (or hashes) to verify the peer.
This can be a path to a file which contains a single public key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by 'sha256//' and separated by ';'. When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this certificate and if it does not exactly match the public key provided to this option, curl will abort the connection before sending or receiving any data. PEM/DER support: 7.39.0 (OpenSSL, GnuTLS and GSKit), 7.43.0 (NSS and wolfSSL), 7.47.0 (mbedtls). sha256 support: 7.44.0 (OpenSSL, GnuTLS, NSS and wolfSSL), 7.47.0 (mbedtls). Other SSL backends are not supported. If this option is used several times, the last one will be used. ##### --post301 (HTTP) Tells curl to respect RFC 7231/6.4.2 and not convert POST requests into GET requests when following a 301 redirection. The non-RFC behavior is ubiquitous in web browsers, so curl does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when using -L, --location. See also --post302, --post303 and -L, --location. Added in 7.17.1. ##### --post302 (HTTP) Tells curl to respect RFC 7231/6.4.3 and not convert POST requests into GET requests when following a 302 redirection. The non-RFC behavior is ubiquitous in web browsers, so curl does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when using -L, --location. See also --post301, --post303 and -L, --location. Added in 7.19.1. ##### --post303 (HTTP) Tells curl to violate RFC 7231/6.4.4 and not convert POST requests into GET requests when following 303 redirections. A server may require a POST to remain a POST after a 303 redirection. This option is meaningful only when using -L, --location. See also --post302, --post301 and -L, --location. Added in 7.26.0.
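One common way to derive a sha256// pin for the --pinnedpubkey option described above is an OpenSSL pipeline over the server certificate. This sketch assumes the OpenSSL CLI is installed; the self-signed certificate is generated here only so the example is self-contained — in practice you would pin the real server certificate's public key.

```shell
# Generate a throwaway self-signed certificate purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Extract the public key, convert to DER, hash with sha256, base64-encode.
pin=$(openssl x509 -in /tmp/demo-cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64)

echo "sha256//$pin"
# Hypothetical use:
# curl --pinnedpubkey "sha256//$pin" https://demo.example/
```

A sha256 digest is 32 bytes, so the base64 pin is always 44 characters ending in a single '='.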
##### --preproxy [protocol://]host[:port] Use the specified SOCKS proxy before connecting to an HTTP or HTTPS -x, --proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. Hence pre proxy. The pre proxy string should be specified with a protocol:// prefix to specify alternative proxy protocols. Use socks4://, socks4a://, socks5:// or socks5h:// to request the specific SOCKS version to be used. No protocol specified will make curl default to SOCKS4. If the port number is not specified in the proxy string, it is assumed to be 1080. User and password that might be provided in the proxy string are URL decoded by curl. This allows you to pass in special characters such as @ by using %40 or pass in a colon with %3a. If this option is used several times, the last one will be used. Added in 7.52.0. ##### -#, --progress-bar Make curl display transfer progress as a simple progress bar instead of the standard, more informational, meter. This progress bar draws a single line of '#' characters across the screen and shows a percentage if the transfer size is known. For transfers without a known size, there will be a space ship (-=o=-) that moves back and forth but only while data is being transferred, with a set of flying hash sign symbols on top. ##### --proto-default <protocol> Tells curl to use protocol for any URL missing a scheme name. Example: curl --proto-default https ftp.mozilla.org An unknown or unsupported protocol causes error CURLE_UNSUPPORTED_PROTOCOL (1). This option does not change the default proxy protocol (http). Without this option curl would make a guess based on the host, see --url for details. Added in 7.45.0. ##### --proto-redir <protocols> Tells curl to limit what protocols it may use on redirect. Protocols denied by --proto are not overridden by this option. See --proto for how protocols are represented.
For example, to allow only HTTP and HTTPS on redirect: curl --proto-redir -all,http,https http://example.com By default curl will allow HTTP, HTTPS, FTP and FTPS on redirect (since 7.65.2). Older versions of curl allowed all protocols on redirect except several disabled for security reasons: Since 7.19.4 FILE and SCP are disabled, and since 7.40.0 SMB and SMBS are also disabled. Specifying all or +all enables all protocols on redirect, including those disabled for security. Added in 7.20.2. ##### --proto <protocols> Tells curl to limit what protocols it may use in the transfer. Protocols are evaluated left to right, are comma separated, and are each a protocol name or 'all', optionally prefixed by zero or more modifiers. Available modifiers are: + Permit this protocol in addition to protocols already permitted (this is the default if no modifier is used). - Deny this protocol, removing it from the list of protocols already permitted. = Permit only this protocol (ignoring the list already permitted), though subject to later modification by subsequent entries in the comma separated list. For example: --proto -ftps uses the default protocols, but disables ftps; --proto -all,https,+http only enables http and https; --proto =http,https also only enables http and https. Unknown protocols produce a warning. This allows scripts to safely rely on being able to disable potentially dangerous protocols, without relying upon support for that protocol being built into curl to avoid an error. This option can be used multiple times, in which case the effect is the same as concatenating the protocols into one instance of the option. See also --proto-redir and --proto-default. Added in 7.20.2. ##### --proxy-anyauth Tells curl to pick a suitable authentication method when communicating with the given HTTP proxy. This might cause an extra request/response round-trip. See also -x, --proxy, --proxy-basic and --proxy-digest. Added in 7.13.2.
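The left-to-right modifier semantics of --proto can be sketched with a tiny shell evaluator. This is an illustration only: `eval_proto` is a made-up helper, and the default set is reduced to four protocols here, whereas real curl permits many more.

```shell
# Toy evaluator for --proto modifier lists (not curl's code).
eval_proto() {
  allowed='http https ftp ftps'            # assumed, reduced default set
  for tok in $(printf '%s' "$1" | tr ',' ' '); do
    name=${tok#[+=-]}                      # strip a leading modifier
    if [ "$name" = all ]; then name='http https ftp ftps'; fi
    case $tok in
      =*) allowed=$name ;;                 # permit only this
      -*) for p in $name; do               # deny: remove from the set
            allowed=$(printf '%s\n' $allowed | grep -vx "$p" | tr '\n' ' ')
          done ;;
      *)  allowed="$allowed $name" ;;      # permit in addition
    esac
  done
  echo $allowed                            # unquoted: normalize spacing
}

eval_proto '-ftps'              # http https ftp
eval_proto '-all,https,+http'   # https http
eval_proto '=http,https'        # http https
```

The three calls mirror the three examples in the text above.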
##### --proxy-basic Tells curl to use HTTP Basic authentication when communicating with the given proxy. Use --basic for enabling HTTP Basic with a remote host. Basic is the default authentication method curl uses with proxies. See also -x, --proxy, --proxy-anyauth and --proxy-digest. ##### --proxy-cacert <file> Same as --cacert but used in HTTPS proxy context. See also --proxy-capath, --cacert, --capath and -x, --proxy. Added in 7.52.0. ##### --proxy-capath <dir> Same as --capath but used in HTTPS proxy context. See also --proxy-cacert, -x, --proxy and --capath. Added in 7.52.0. ##### --proxy-cert-type <type> Same as --cert-type but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-cert <cert[:passwd]> Same as -E, --cert but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-ciphers <list> Same as --ciphers but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-crlfile <file> Same as --crlfile but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-digest Tells curl to use HTTP Digest authentication when communicating with the given proxy. Use --digest for enabling HTTP Digest with a remote host. See also -x, --proxy, --proxy-anyauth and --proxy-basic. ##### --proxy-header <header/@file> (HTTP) Extra header to include in the request when sending HTTP to a proxy. ##### --proxy-insecure Same as -k, --insecure but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-key-type <type> Same as --key-type but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-key <key> Same as --key but used in HTTPS proxy context. ##### --proxy-negotiate Tells curl to use HTTP Negotiate (SPNEGO) authentication when communicating with the given proxy. Use --negotiate for enabling HTTP Negotiate (SPNEGO) with a remote host. See also --proxy-anyauth and --proxy-basic. Added in 7.17.1. ##### --proxy-ntlm Tells curl to use HTTP NTLM authentication when communicating with the given proxy. Use --ntlm for enabling NTLM with a remote host. 
See also --proxy-negotiate and --proxy-anyauth. ##### --proxy-pass <phrase> Same as --pass but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-pinnedpubkey <hashes> (TLS) Tells curl to use the specified public key file (or hashes) to verify the proxy. ##### --proxy-service-name <name> This option allows you to change the service name for proxy negotiation. Added in 7.43.0. ##### --proxy-ssl-allow-beast Same as --ssl-allow-beast but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-ssl-auto-client-cert Same as --ssl-auto-client-cert but used in HTTPS proxy context. Added in 7.77.0. ##### --proxy-tls13-ciphers <ciphersuite list> (TLS) Specifies which cipher suites to use in the connection to your HTTPS proxy when it negotiates TLS 1.3. ##### --proxy-tlsauthtype <type> Same as --tlsauthtype but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-tlspassword <string> Same as --tlspassword but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-tlsuser <name> Same as --tlsuser but used in HTTPS proxy context. Added in 7.52.0. ##### --proxy-tlsv1 Same as -1, --tlsv1 but used in HTTPS proxy context. Added in 7.52.0. ##### -U, --proxy-user <user:password> Specify the user name and password to use for proxy authentication. If you use a Windows SSPI-enabled curl binary and do either Negotiate or NTLM authentication then you can tell curl to select the user name and password from your environment by specifying a single colon with this option: "-U :". On systems where it works, curl will hide the given option argument from process listings. This is not enough to protect credentials from possibly getting seen by other users on the same system, as they will still be visible for a brief moment before being cleared. Such sensitive data should instead be retrieved from a file or similar and never used in clear text in a command line. If this option is used several times, the last one will be used.
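Following the advice above about keeping credentials off the command line, one sketch (host, port and credentials are all made up) is to keep user:password in a file only the owner can read and splice it in with command substitution:

```shell
# Store proxy credentials in a private file instead of typing them into
# the command line (file name and credentials are hypothetical).
umask 077                                     # new files: owner-only
printf '%s' 'myuser:s3cret' > /tmp/proxy-creds

# The curl invocation would then read it back, e.g.:
# curl -x http://proxy.example.com:3128 -U "$(cat /tmp/proxy-creds)" https://example.com/

cat /tmp/proxy-creds; echo
```

This keeps the secret out of shell history and scripts; as the text notes, curl itself additionally tries to hide the argument from process listings where the system supports it.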
##### -x, --proxy [protocol://]host[:port] Use the specified proxy. ##### --proxy1.0 <host[:port]> Use the specified HTTP 1.0 proxy. If the port number is not specified, it is assumed to be 1080. The only difference between this and the HTTP proxy option -x, --proxy, is that attempts to use CONNECT through the proxy will specify an HTTP 1.0 protocol instead of the default HTTP 1.1. ##### -p, --proxytunnel When an HTTP proxy is used (-x, --proxy), this option will make curl tunnel through the proxy. The tunnel approach is made with the HTTP proxy CONNECT request and requires that the proxy allows direct connect to the remote port number curl wants to tunnel through to. To suppress proxy CONNECT response headers when curl is set to output headers use --suppress-connect-headers. See also -x, --proxy. ##### --pubkey <key> (SFTP SCP) Public key file name. Allows you to provide your public key in this separate file. If this option is used several times, the last one will be used. (As of 7.39.0, curl attempts to automatically extract the public key from the private key file, so passing this option is generally not required. Note that this public key extraction requires libcurl to be linked against a copy of libssh2 1.2.8 or higher that is itself linked against OpenSSL.) ##### -Q, --quote (FTP SFTP) Send an arbitrary command to the remote FTP or SFTP server. ##### --random-file <file> Specify the path name to a file containing what will be considered as random data. The data may be used to seed the random engine for SSL connections. See also the --egd-file option. ##### -r, --range <range> (HTTP FTP SFTP FILE) Retrieve a byte range (i.e. a partial document) from an HTTP/1.1, FTP or SFTP server or a local FILE. Ranges can be specified in a number of ways.
0-499 specifies the first 500 bytes 500-999 specifies the second 500 bytes -500 specifies the last 500 bytes 9500- specifies the bytes from offset 9500 and forward 0-0,-1 specifies the first and last byte only (*) (HTTP) 100-199,500-599 specifies two separate 100-byte ranges (*) (HTTP) (*) = NOTE that this will cause the server to reply with a multipart response, which will be returned as-is by curl! Parsing or otherwise transforming this response is the responsibility of the caller. Only digit characters (0-9) are valid in the 'start' and 'stop' fields of the 'start-stop' range syntax. If a non-digit character is given in the range, the server's response will be unspecified, depending on the server's configuration. You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document. FTP and SFTP range downloads only support the simple 'start-stop' syntax (optionally with one of the numbers omitted). FTP use depends on the extended FTP command SIZE. If this option is used several times, the last one will be used. ##### --raw (HTTP) When used, it disables all internal HTTP decoding of content or transfer encodings and instead makes them passed on unaltered, raw. Added in 7.16.2. ##### -e, --referer <URL> (HTTP) Sends the "Referrer Page" information to the HTTP server. This can also be set with the -H, --header flag of course. When used with -L, --location you can append ";auto" to the -e, --referer URL to make curl automatically set the previous URL when it follows a Location: header. The ";auto" string can be used alone, even if you don't set an initial -e, --referer. If this option is used several times, the last one will be used. See also -A, --user-agent and -H, --header. ##### -J, --remote-header-name (HTTP) This option tells the -O, --remote-name option to use the server-specified Content-Disposition filename instead of extracting a filename from the URL.
If the server specifies a file name and a file with that name already exists in the current working directory it will not be overwritten and an error will occur. If the server doesn't specify a file name then this option has no effect. There's no attempt to decode %-sequences (yet) in the provided file name, so this option may provide you with rather unexpected file names. WARNING: Exercise judicious use of this option, especially on Windows. A rogue server could send you the name of a DLL or other file that could possibly be loaded automatically by Windows or some third party software. ##### --remote-name-all This option changes the default action for all given URLs to be dealt with as if -O, --remote-name were used for each one. So if you want to disable that for a specific URL after --remote-name-all has been used, you must use "-o -" or --no-remote-name. Added in 7.19.0. ##### -O, --remote-name Write output to a local file named like the remote file we get. (Only the file part of the remote file is used, the path is cut off.) The file will be saved in the current working directory. If you want the file saved in a different directory, make sure you change the current working directory before invoking curl with this option. The remote file name to use for saving is extracted from the given URL, nothing else, and if it already exists it will be overwritten. If you want the server to be able to choose the file name refer to -J, --remote-header-name which can be used in addition to this option. If the server chooses a file name and that name already exists it will not be overwritten. There is no URL decoding done on the file name. If it has %20 or other URL encoded parts of the name, they will end up as-is as file name. You may use this option as many times as the number of URLs you have. ##### -R, --remote-time When used, this will make curl attempt to figure out the timestamp of the remote file, and if that is available make the local file get that same timestamp. 
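The file-name rule -O uses, as described above, can be mimicked in plain shell: take the part of the URL path after the last slash and leave %-escapes alone. The URL is hypothetical and this is an approximation, not curl's parser:

```shell
# Approximate the -O naming rule: basename of the URL path, no decoding.
url='https://example.com/files/re%20port.pdf'   # hypothetical URL
path=${url#*//*/}     # drop scheme://host/
name=${path##*/}      # keep what follows the last slash
echo "$name"          # re%20port.pdf -- the %20 is kept as-is
```

This also illustrates the warning in the text: since nothing is decoded, URL-encoded parts end up verbatim in the local file name.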
##### --request-target (HTTP) Tells curl to use an alternative "target" (path) instead of using the path as provided in the URL. Particularly useful when wanting to issue HTTP requests without leading slash or other data that doesn't follow the regular URL pattern, like "OPTIONS *". Added in 7.55.0. ##### -X, --request <command> (HTTP) Specifies a custom request method to use when communicating with the HTTP server. ##### --resolve <[+]host:port:addr[,addr]...> Provide a custom address for a specific host and port pair. ##### --retry-all-errors Retry on any error. ##### --retry-connrefused In addition to the other conditions, consider ECONNREFUSED as a transient error too for --retry. This option is used together with --retry. Added in 7.52.0. ##### --retry-delay <seconds> Make curl sleep this amount of time before each retry when a transfer has failed with a transient error (it changes the default backoff time algorithm between retries). This option is only interesting if --retry is also used. Setting this delay to zero will make curl use the default backoff time. If this option is used several times, the last one will be used. Added in 7.12.3. ##### --retry-max-time <seconds> The retry timer is reset before the first transfer attempt. Retries will be done as usual (see --retry) as long as the timer hasn't reached this given limit. Notice that if the timer hasn't reached the limit, the request will be made and while performing, it may take longer than this given time period. To limit a single request's maximum time, use -m, --max-time. Set this option to zero to not time out retries. If this option is used several times, the last one will be used. Added in 7.12.3. ##### --retry <num> If a transient error is returned when curl tries to perform a transfer, it will retry this number of times before giving up. Setting the number to 0 makes curl do no retries (which is the default).
Transient error means either: a timeout, an FTP 4xx response code or an HTTP 408, 429, 500, 502, 503 or 504 response code. When curl is about to retry a transfer, it will first wait one second and then for all forthcoming retries it will double the waiting time until it reaches 10 minutes which then will be the delay between the rest of the retries. By using --retry-delay you disable this exponential backoff algorithm. See also --retry-max-time to limit the total time allowed for retries. Since curl 7.66.0, curl will comply with the Retry-After: response header if one was present to know when to issue the next retry. If this option is used several times, the last one will be used. Added in 7.12.3. ##### --sasl-authzid <identity> Use this authorisation identity (authzid), during SASL PLAIN authentication, in addition to the authentication identity (authcid) as specified by -u, --user. If the option isn't specified, the server will derive the authzid from the authcid, but if specified, and depending on the server implementation, it may be used to access another user's inbox, that the user has been granted access to, or a shared mailbox for example. Added in 7.66.0. ##### --sasl-ir Enable initial response in SASL authentication. Added in 7.31.0. ##### --service-name <name> This option allows you to change the service name for SPNEGO. Example: --negotiate --service-name sockd would use sockd/server-name. Added in 7.43.0. ##### -S, --show-error When used with -s, --silent, it makes curl show an error message if it fails. See also --no-progress-meter. ##### -s, --silent Silent or quiet mode. Don't show progress meter or error messages. Makes curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it. Use -S, --show-error in addition to this option to disable the progress meter but still show error messages. See also -v, --verbose, --stderr and --no-progress-meter.
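The --retry backoff described above (wait one second, double after each retry, cap at 10 minutes) produces a fixed delay schedule; the loop below just reproduces that arithmetic:

```shell
# Delay before each of the first 11 retries with curl's default backoff:
# start at 1 second, double every time, never exceed 600 seconds.
delay=1
schedule=''
for retry in 1 2 3 4 5 6 7 8 9 10 11; do
  schedule="$schedule$delay "
  delay=$((delay * 2))
  if [ "$delay" -gt 600 ]; then delay=600; fi
done
echo "$schedule"      # 1 2 4 8 16 32 64 128 256 512 600
```

So the cap is reached on the 11th retry, and every retry after that waits 600 seconds (unless --retry-delay or a Retry-After: header overrides the schedule).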
##### --socks4 <host[:port]> Use the specified SOCKS4 proxy. If the port number is not specified, it is assumed to be 1080. Using this socket type makes curl resolve the host name and pass the address on to the proxy. This option overrides any previous use of -x, --proxy, as they are mutually exclusive. Since 7.21.7, this option is superfluous since you can specify a socks4 proxy with -x, --proxy using a socks4:// protocol prefix. Since 7.52.0, --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If this option is used several times, the last one will be used. Added in 7.15.2. ##### --socks4a <host[:port]> Use the specified SOCKS4a proxy. If the port number is not specified, it is assumed to be 1080. This asks the proxy to resolve the host name. This option overrides any previous use of -x, --proxy, as they are mutually exclusive. Since 7.21.7, this option is superfluous since you can specify a socks4a proxy with -x, --proxy using a socks4a:// protocol prefix. Since 7.52.0, --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If this option is used several times, the last one will be used. Added in 7.18.0. ##### --socks5-basic Tells curl to use username/password authentication when connecting to a SOCKS5 proxy. The username/password authentication is enabled by default. Use --socks5-gssapi to force GSS-API authentication to SOCKS5 proxies. Added in 7.55.0. ##### --socks5-gssapi-nec As part of the GSS-API negotiation a protection mode is negotiated. RFC 1961 says in section 4.3/4.4 it should be protected, but the NEC reference implementation does not.
The option --socks5-gssapi-nec allows the unprotected exchange of the protection mode negotiation. Added in 7.19.4. ##### --socks5-gssapi-service <name> The default service name for a socks server is rcmd/server-fqdn. This option allows you to change it. Examples: --socks5 proxy-name --socks5-gssapi-service sockd would use sockd/proxy-name; --socks5 proxy-name --socks5-gssapi-service sockd/real-name would use sockd/real-name for cases where the proxy-name does not match the principal name. Added in 7.19.4. ##### --socks5-gssapi Tells curl to use GSS-API authentication when connecting to a SOCKS5 proxy. The GSS-API authentication is enabled by default (if curl is compiled with GSS-API support). Use --socks5-basic to force username/password authentication to SOCKS5 proxies. Added in 7.55.0. ##### --socks5-hostname <host[:port]> Use the specified SOCKS5 proxy (and let the proxy resolve the host name). If the port number is not specified, it is assumed to be 1080. This option overrides any previous use of -x, --proxy, as they are mutually exclusive. Since 7.21.7, this option is superfluous since you can specify a socks5 hostname proxy with -x, --proxy using a socks5h:// protocol prefix. Since 7.52.0, --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If this option is used several times, the last one will be used. Added in 7.18.0. ##### --socks5 <host[:port]> Use the specified SOCKS5 proxy - but resolve the host name locally. If the port number is not specified, it is assumed to be 1080. This option overrides any previous use of -x, --proxy, as they are mutually exclusive. Since 7.21.7, this option is superfluous since you can specify a socks5 proxy with -x, --proxy using a socks5:// protocol prefix.
Since 7.52.0, --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If this option is used several times, the last one will be used. This option (as well as --socks4) does not work with IPv6, FTPS or LDAP. Added in 7.18.0. ##### -Y, --speed-limit <speed> If a download is slower than this given speed (in bytes per second) for speed-time seconds, it gets aborted. speed-time is set with -y, --speed-time and is 30 if not set. If this option is used several times, the last one will be used. ##### -y, --speed-time <seconds> If a download is slower than speed-limit bytes per second during a speed-time period, the download gets aborted. If speed-time is used, the default speed-limit will be 1 unless set with -Y, --speed-limit. This option controls transfers and thus will not affect slow connects etc. If this is a concern for you, try the --connect-timeout option. If this option is used several times, the last one will be used. ##### --ssl-allow-beast This option tells curl to not work around a security flaw in the SSL3 and TLS1.0 protocols known as BEAST. If this option isn't used, the SSL layer may use workarounds known to cause interoperability problems with some older SSL implementations. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that. Added in 7.25.0. ##### --ssl-auto-client-cert Tell libcurl to automatically locate and use a client certificate for authentication, when requested by the server. This option is only supported for Schannel (the native Windows SSL library). Prior to 7.77.0 this was the default behavior in libcurl with Schannel. Since the server can request any certificate that supports client authentication in the OS certificate store it could be a privacy violation and unexpected. See also --proxy-ssl-auto-client-cert. Added in 7.77.0.
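A worked illustration of the -Y/-y interaction above, with made-up numbers: given --speed-limit 1000 and --speed-time 30, a transfer averaging under 1000 bytes/sec across a 30-second window gets aborted. The curl line is a sketch with a placeholder URL:

```shell
# Hypothetical command: abort if slower than 1000 bytes/sec for 30 s.
# curl --speed-limit 1000 --speed-time 30 -O https://example.com/big.bin

bytes=24000 seconds=30                 # pretend this was one 30 s window
speed=$((bytes / seconds))             # 800 bytes/sec
if [ "$speed" -lt 1000 ]; then
  echo "abort: $speed bytes/sec is below the 1000 bytes/sec limit"
fi
```

Note that, as the text says, this mechanism watches an ongoing transfer only; a connection that never gets established is governed by --connect-timeout instead.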
##### --ssl-no-revoke (Schannel) This option tells curl to disable certificate revocation checks. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that. Added in 7.44.0. ##### --ssl-reqd (FTP IMAP POP3 SMTP) Require SSL/TLS for the connection. Terminates the connection if the server doesn't support SSL/TLS. This option was formerly known as --ftp-ssl-reqd. Added in 7.20.0. ##### --ssl-revoke-best-effort (Schannel) This option tells curl to ignore certificate revocation checks when they failed due to missing/offline distribution points for the revocation check lists. Added in 7.70.0. ##### --ssl (FTP IMAP POP3 SMTP) Try to use SSL/TLS for the connection. Reverts to a non-secure connection if the server doesn't support SSL/TLS. See also --ftp-ssl-control and --ssl-reqd for different levels of encryption required. This option was formerly known as --ftp-ssl (Added in 7.11.0). That option name can still be used but will be removed in a future version. Added in 7.20.0. ##### -2, --sslv2 (SSL) This option previously asked curl to use SSLv2, but starting in curl 7.77.0 this instruction is ignored. SSLv2 is widely considered insecure (see RFC 6176). See also --http1.1 and --http2. -2, --sslv2 requires that the underlying libcurl was built to support TLS. This option overrides -3, --sslv3 and -1, --tlsv1 and --tlsv1.1 and --tlsv1.2. ##### -3, --sslv3 (SSL) This option previously asked curl to use SSLv3, but starting in curl 7.77.0 this instruction is ignored. SSLv3 is widely considered insecure (see RFC 7568). See also --http1.1 and --http2. -3, --sslv3 requires that the underlying libcurl was built to support TLS. This option overrides -2, --sslv2 and -1, --tlsv1 and --tlsv1.1 and --tlsv1.2. ##### --stderr <file> Redirect all writes to stderr to the specified file instead. If the file name is a plain '-', it is instead written to stdout. If this option is used several times, the last one will be used. 
See also -v, --verbose and -s, --silent. ##### --styled-output Enables the automatic use of bold font styles when writing HTTP headers to the terminal. Use --no-styled-output to switch them off. Added in 7.61.0. ##### --suppress-connect-headers When -p, --proxytunnel is used and a CONNECT request is made don't output proxy CONNECT response headers. This option is meant to be used with -D, --dump-header or -i, --include which are used to show protocol headers in the output. It has no effect on debug options such as -v, --verbose or --trace, or any statistics. See also -D, --dump-header, -i, --include and -p, --proxytunnel. ##### --tcp-fastopen Enable use of TCP Fast Open (RFC7413). Added in 7.49.0. ##### --tcp-nodelay Turn on the TCP_NODELAY option. See the curl_easy_setopt(3) man page for details about this option. Since 7.50.2, curl sets this option by default and you need to explicitly switch it off if you don't want it on. Added in 7.11.2. ##### -t, --telnet-option <opt=val> Pass options to the telnet protocol. Supported options are: TTYPE=<term> Sets the terminal type. XDISPLOC=<X display> Sets the X display location. NEW_ENV=<var,val> Sets an environment variable. ##### --tftp-blksize <value> (TFTP) Set TFTP BLKSIZE option (must be >512). This is the block size that curl will try to use when transferring data to or from a TFTP server. By default 512 bytes will be used. If this option is used several times, the last one will be used. Added in 7.20.0. ##### --tftp-no-options (TFTP) Tells curl not to send TFTP options requests. This option improves interop with some legacy servers that do not acknowledge or properly implement TFTP options. When this option is used --tftp-blksize is ignored. Added in 7.48.0. ##### -z, --time-cond <time> (HTTP FTP) Request a file that has been modified later than the given time and date, or one that has been modified before that time. 
The <date expression> can be all sorts of date strings or if it doesn't match any internal ones, it is taken as a filename and tries to get the modification date (mtime) from <file> instead. See the curl_getdate(3) man pages for date expression details. Start the date expression with a dash (-) to make it request for a document that is older than the given date/time, default is a document that is newer than the specified date/time. If this option is used several times, the last one will be used. ##### --tls-max <VERSION> (SSL) VERSION defines maximum supported TLS version. The minimum acceptable version is set by tlsv1.0, tlsv1.1, tlsv1.2 or tlsv1.3. If the connection is done without TLS, this option has no effect. This includes QUIC-using (HTTP/3) transfers. default Use up to recommended TLS version. 1.0 Use up to TLSv1.0. 1.1 Use up to TLSv1.1. 1.2 Use up to TLSv1.2. 1.3 Use up to TLSv1.3. See also --tlsv1.0, --tlsv1.1, --tlsv1.2 and --tlsv1.3. --tls-max requires that the underlying libcurl was built to support TLS. Added in 7.54.0. ##### --tls13-ciphers <ciphersuite list> (TLS) Specifies which cipher suites to use in the connection if it negotiates TLS 1.3. The list of ciphers suites must specify valid ciphers. Read up on TLS 1.3 cipher suite details on this URL: https://curl.se/docs/ssl-ciphers.html This option is currently used only when curl is built to use OpenSSL 1.1.1 or later. If you are using a different SSL backend you can try setting TLS 1.3 cipher suites by using the --ciphers option. If this option is used several times, the last one will be used. ##### --tlsauthtype <type> Set TLS authentication type. Currently, the only supported option is "SRP", for TLS-SRP (RFC 5054). If --tlsuser and --tlspassword are specified but --tlsauthtype is not, then this option defaults to "SRP". This option works only if the underlying libcurl is built with TLS-SRP support, which requires OpenSSL or GnuTLS with TLS-SRP support. Added in 7.21.4. 
##### --tlspassword Set password for use with the TLS authentication method specified with --tlsauthtype. Requires that --tlsuser also be set. This doesn't work with TLS 1.3. Added in 7.21.4. ##### --tlsuser <name> Set username for use with the TLS authentication method specified with --tlsauthtype. Requires that --tlspassword also is set. This doesn't work with TLS 1.3. Added in 7.21.4. ##### --tlsv1.0 (TLS) Forces curl to use TLS version 1.0 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.0, but behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Added in 7.34.0. ##### --tlsv1.1 (TLS) Forces curl to use TLS version 1.1 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.1, but behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Added in 7.34.0. ##### --tlsv1.2 (TLS) Forces curl to use TLS version 1.2 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.2, but behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Added in 7.34.0. ##### --tlsv1.3 (TLS) Forces curl to use TLS version 1.3 or later when connecting to a remote TLS server. If the connection is done without TLS, this option has no effect. This includes QUIC-using (HTTP/3) transfers. Note that TLS 1.3 is not supported by all TLS backends. Added in 7.52.0. ##### -1, --tlsv1 (SSL) Tells curl to use at least TLS version 1.x when negotiating with a remote TLS server. That means TLS version 1.0 or higher See also --http1.1 and --http2. -1, --tlsv1 requires that the underlying libcurl was built to support TLS. This option overrides --tlsv1.1 and --tlsv1.2 and --tlsv1.3. 
##### --tr-encoding (HTTP) Request a compressed Transfer-Encoding response using one of the algorithms curl supports, and uncompress the data while receiving it. Added in 7.21.6. ##### --trace-ascii <file> Enables a full trace dump of all incoming and outgoing data, including descriptive information, to the given output file. Use "-" as filename to have the output sent to stdout. This is very similar to --trace, but leaves out the hex part and only shows the ASCII part of the dump. It makes smaller output that might be easier to read for untrained humans. If this option is used several times, the last one will be used. This option overrides --trace and -v, --verbose. ##### --trace-time Prepends a time stamp to each trace or verbose line that curl displays. Added in 7.14.0. ##### --trace <file> Enables a full trace dump of all incoming and outgoing data, including descriptive information, to the given output file. Use "-" as filename to have the output sent to stdout. Use "%" as filename to have the output sent to stderr. If this option is used several times, the last one will be used. This option overrides -v, --verbose and --trace-ascii. ##### --unix-socket <path> (HTTP) Connect through this Unix domain socket, instead of using the network. Added in 7.40.0. ##### -T, --upload-file <file> This transfers the specified local file to the remote URL. ##### --url <url> Specify a URL to fetch. This option is mostly handy when you want to specify URL(s) in a config file. If the given URL is missing a scheme name (such as "http://" or "ftp://" etc) then curl will make a guess based on the host. If the outermost sub-domain name matches DICT, FTP, IMAP, LDAP, POP3 or SMTP then that protocol will be used, otherwise HTTP will be used. Since 7.45.0 guessing can be disabled by setting a default protocol, see --proto-default for details. This option may be used any number of times. To control where this URL is written, use the -o, --output or the -O, --remote-name options. 
Warning: On Windows, particular file:// accesses can be converted to network accesses by the operating system. Beware! ##### -B, --use-ascii (FTP LDAP) Enable ASCII transfer. For FTP, this can also be enforced by using a URL that ends with ";type=A". This option causes data sent to stdout to be in text mode for win32 systems. ##### -A, --user-agent <name> (HTTP) Specify the User-Agent string to send to the HTTP server. To encode blanks in the string, surround the string with single quote marks. This header can also be set with the -H, --header or the --proxy-header options. If you give an empty argument to -A, --user-agent (""), it will remove the header completely from the request. If you prefer a blank header, you can set it to a single space (" "). If this option is used several times, the last one will be used. ##### -u, --user <user:password> Specify the user name and password to use for server authentication. ##### -v, --verbose Makes curl verbose during the operation. ##### -V, --version Displays information about curl and the libcurl version it uses. ##### -w, --write-out <format> Make curl display information on stdout after a completed transfer. ##### --xattr When saving output to a file, this option tells curl to store certain file metadata in extended file attributes. Currently, the URL is stored in the xdg.origin.url attribute and, for HTTP, the content type is stored in the mime_type attribute. If the file system does not support extended attributes, a warning is issued.
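Several of the output options above (-s, -o, -w) can be tried without any network access, since curl also speaks file:// URLs. A minimal sketch; the /tmp path is just an illustrative location:

```shell
# curl handles local file:// URLs, so its output options can be tested
# offline. Create a small file and "transfer" it:
printf 'hello\n' > /tmp/curl-demo.txt

# -s silences the progress meter; the file body goes to stdout:
curl -s file:///tmp/curl-demo.txt

# -o discards the body to /dev/null while -w prints transfer
# statistics after completion -- here, the number of bytes downloaded:
curl -s -o /dev/null -w '%{size_download}\n' file:///tmp/curl-demo.txt
```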

How to use the chown command in Linux

The chown command changes user ownership of a file, directory, or link in Linux. The command chown, an abbreviation of change owner, is used on Unix and Unix-like operating systems to change the owner of files and directories. Change the ownership of a file named filename to a new owner named install chown install filename Help chown --help Version chown --version The ls -l output decodes as follows: ls -l
-rw-r--r-- 12 root root
|[-][-][-]-    [--] [--]
| |  |  | |      |    |
| |  |  | |      |    +--------------> 7. Group
| |  |  | |      +-------------------> 6. Owner
| |  |  | +--------------------------> 5. Alternate Access Method
| |  |  +----------------------------> 4. Others Permissions
| |  +-------------------------------> 3. Group Permissions
| +----------------------------------> 2. Owner Permissions
+------------------------------------> 1. File Type
Change Ownership of Multiple Linux Files root will be the new owner of the files chown root file1 file2 With a UID chown 1002 filename Change the group of the file chown :group filename.md (if no group named group exists, chown reports chown: invalid group: ‘:group’) Check Owner and Group Before Making Changes. Change ownership only after verifying the current owner and group chown --from=root:group user:group2 filename.md Recurse into subdirectories chown -R user:group dirname The syntax for the chown command is as follows: Usage: chown [OPTION]... [OWNER][:[GROUP]] FILE... chown [OPTION]... --reference=RFILE FILE... OPTIONS: ##### --reference change the owner and group of each FILE to those of RFILE. ##### -c, --changes like verbose but report only when a change is made ##### -f, --silent, --quiet suppress most error messages ##### -v, --verbose output a diagnostic for every file processed ##### --dereference affect the referent of each symbolic link (this is the default), rather than the symbolic link itself ##### -h, --no-dereference affect symbolic links instead of any referenced file (useful only on systems that can change the ownership of a symlink) ##### --from=CURRENT_OWNER:CURRENT_GROUP change the owner and/or group of each file only if its current owner and/or group match those specified here. 
Either may be omitted, in which case a match is not required for the omitted attribute ##### --no-preserve-root do not treat '/' specially (the default) ##### --preserve-root fail to operate recursively on '/' ##### --reference=RFILE use RFILE's owner and group rather than specifying OWNER:GROUP values ##### -R, --recursive operate on files and directories recursively ##### -H if a command line argument is a symbolic link to a directory, traverse it ##### -L traverse every symbolic link to a directory encountered ##### -P do not traverse any symbolic links (default) ##### --help display this help and exit ##### --version output version information and exit
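Changing ownership to your own user and group always succeeds without root, which makes the mechanics above safe to try anywhere. A sketch in a throwaway directory (file names are arbitrary):

```shell
# A self-contained demo: chown to your *own* user and group needs no root.
tmp=$(mktemp -d)
touch "$tmp/a.txt" "$tmp/b.txt"

me=$(id -un)
mygrp=$(id -gn)

# owner and group in one step, on several files at once:
chown "$me:$mygrp" "$tmp/a.txt" "$tmp/b.txt"

# -v reports what was (or wasn't) changed; -R would recurse into $tmp:
chown -v "$me" "$tmp/a.txt"

# verify with ls -l: the third and fourth columns are owner and group
ls -l "$tmp/a.txt"
rm -r "$tmp"
```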

How to use the chmod command in Linux

chmod is the command and system call used to change the access permissions of file system objects (files and directories), sometimes known as modes. The chmod command sets the file permission flags on a file or folder. The flags define who can read, write to, or execute the file. When you list files with the -l (long format) option you’ll see a string of characters that looks like -rw-r--r--. Add write permission (w) to the Group's (g) access modes of a directory chmod g+w dirname The ls -l output decodes as follows: ls -l
-rw-r--r-- 12 root root
|[-][-][-]-    [--] [--]
| |  |  | |      |    |
| |  |  | |      |    +--------------> 7. Group
| |  |  | |      +-------------------> 6. Owner
| |  |  | +--------------------------> 5. Alternate Access Method
| |  |  +----------------------------> 4. Others Permissions
| |  +-------------------------------> 3. Group Permissions
| +----------------------------------> 2. Owner Permissions
+------------------------------------> 1. File Type
How to change directory permissions in Linux Add permissions chmod +rwx filename Remove permissions chmod -rwx filename.md cat filename.md cat: filename.md: Permission denied To change directory permissions for everyone, use “u” for users, “g” for group, “o” for others, and “ugo” or “a” (for all). Give read, write, and execute to everyone chmod ugo+rwx foldername chmod u=rw,og=r filename.md Include files in subdirectories chmod -R o-r *.page The syntax for the chmod command is as follows: Usage: chmod [OPTION]... MODE[,MODE]... FILE... chmod [OPTION]... OCTAL-MODE FILE... chmod [OPTION]... --reference=RFILE FILE... OPTIONS: ##### --reference change the mode of each FILE to that of RFILE. 
##### -c, --changes like verbose but report only when a change is made ##### -f, --silent, --quiet suppress most error messages ##### -v, --verbose output a diagnostic for every file processed ##### --no-preserve-root do not treat '/' specially (the default) ##### --preserve-root fail to operate recursively on '/' ##### --reference=RFILE use RFILE's mode instead of MODE values ##### -R, --recursive change files and directories recursively ##### --help display this help and exit ##### --version output version information and exit ######flags ##### u The file owner. ##### g The users who are members of the group. ##### o All other users. ##### a All users, identical to ugo.
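Symbolic and octal modes express the same permissions; a minimal sketch in a temporary directory (the script name is arbitrary):

```shell
tmp=$(mktemp -d)
f="$tmp/script.sh"
printf '#!/bin/sh\necho ok\n' > "$f"

chmod u=rwx,go=rx "$f"   # symbolic: owner rwx, group/others r-x
chmod 755 "$f"           # octal equivalent: 7=rwx, 5=r-x, 5=r-x

# the mode string should now read -rwxr-xr-x
ls -l "$f"
"$f"                     # execute bit is set, so this runs and prints "ok"
rm -r "$tmp"
```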

How to use the cd command in Linux

The cd (“change directory”) command is used to change the current working directory in Linux and other Unix-like operating systems. The cd command, also known as chdir (change directory), is a command-line shell command used to change the current working directory in various operating systems. It can be used in shell scripts and batch files. Change from the current directory to /usr/local/bin cd /usr/local/bin pwd /usr/local/bin Switch back to the previous directory where you were working earlier cd - Change the current directory to its parent directory cd .. The previous working directory is also kept in the $OLDPWD shell variable. Move to the user's home directory from anywhere cd ~ pwd /root Change to a directory containing white spaces cd test\ installmd/ The syntax for the cd command is as follows: Usage: cd [-L|[-P [-e]]] [dir] OPTIONS: ##### -L Follow symbolic links. By default, cd behaves as if the -L option is specified. ##### -P Don’t follow symbolic links. In other words, when this option is specified and you navigate to a symlink that points to a directory, cd changes into the physical directory the symlink points to.
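The -L/-P distinction above is easiest to see with a symlinked directory; a sketch using a temporary tree:

```shell
# cd is a shell builtin: it changes the current shell's directory, so its
# effect is scoped to the shell that runs it. -L (the default) keeps
# symlinks in the logical path; -P resolves them to the physical path.
tmp=$(mktemp -d)
mkdir "$tmp/real"
ln -s "$tmp/real" "$tmp/link"

cd "$tmp/link" && pwd      # logical path, ends in .../link
cd -P "$tmp/link" && pwd   # physical path, ends in .../real

cd - >/dev/null            # back to the previous directory (also in $OLDPWD)
cd /tmp && rm -r "$tmp"    # leave the tree before removing it
```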

How to use the cat command in Linux

Concatenate FILE(s), or standard input, to standard output. The cat command (short for “concatenate”) lists the contents of files to the terminal window. This is faster than opening the file in an editor, and there’s no chance you can accidentally alter the file. Display Contents of File passwd file cat /etc/passwd root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/sbin/nologin View Contents of Multiple Files in terminal cat test test1 Help cat --help Create a New File. Awaits input from the user; type the desired text and press CTRL+D (hold down the Ctrl key and type ‘d‘) to exit. The text will be written to the test file. cat >test cat test ssss ssss With files longer than the number of lines in your terminal window, the text will whip past too fast for you to read. You can pipe the output from cat through less to make the process more manageable. Type q to quit from less. cat .bashrc | less # User specific aliases and functions alias rm='rm -i' ...... Display Line Numbers in File cat -n .bashrc Display TAB Characters in File cat -T .bashrc Sorting Contents cat test test1 test2 | sort > test4 The syntax for the cat command is as follows: Usage: cat [OPTION]... [FILE]... OPTIONS: ##### -A, --show-all equivalent to -vET ##### -b, --number-nonblank number nonempty output lines, overrides -n ##### -e equivalent to -vE ##### -E, --show-ends display $ at end of each line ##### -n, --number number all output lines ##### -s, --squeeze-blank suppress repeated empty output lines ##### -t equivalent to -vT ##### -T, --show-tabs display TAB characters as ^I ##### -u (ignored) ##### -v, --show-nonprinting use ^ and M- notation, except for LFD and TAB ##### --help display this help and exit ##### --version output version information and exit
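The concatenation, numbering, and sorting behaviors above can be sketched in a temporary directory (file names and contents are arbitrary):

```shell
tmp=$(mktemp -d)
printf 'beta\nalpha\n' > "$tmp/one"
printf 'gamma\n'       > "$tmp/two"

cat "$tmp/one" "$tmp/two"          # concatenate both files to stdout
cat -n "$tmp/one"                  # number every output line
cat "$tmp/one" "$tmp/two" | sort   # pipe through sort, as in the example above
rm -r "$tmp"
```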

How to use the alias command in Linux

The alias command instructs the shell to replace one string with another string while executing commands. The alias command lets you give your own name to a command or sequence of commands. You can then type your short name, and the shell will execute the command or sequence of commands for you. List Currently Defined Aliases in Linux alias alias cp='cp -i' alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l.='ls -d .* --color=auto' alias ll='ls -l --color=auto' alias ls='ls --color=auto' alias mv='mv -i' alias rm='rm -i' alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde' Creating a Linux alias is very easy. You can either enter them at the command line as you're working, or more likely, you'll put them in one of your startup files, like your .bashrc file, so they will be available every time you log in. alias l="ls -al" l Jul 1 13:28 .bash_history Dec 29 2013 .bash_logout GIT aliases alias gpom="git push origin master" alias gs="git status" alias gb="git branch" alias gco="git checkout" An alias added via the command line can be removed using the unalias command. unalias alias_name unalias l unalias -a [remove all aliases] -bash: l: command not found The syntax for the alias command is as follows: Usage: alias [-p] [name[=value] ... ] OPTIONS: ##### -p This option prints all the defined aliases in a reusable format.
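The define/list/remove cycle above in one runnable sketch (the alias name ll is arbitrary):

```shell
# Aliases are a shell feature: define one, inspect it, then remove it.
# (Non-interactive bash scripts need expansion enabled before *using* an
# alias: shopt -s expand_aliases. Defining and listing work regardless.)
alias ll='ls -al'    # define a short name
alias ll             # print the definition back
alias                # list every defined alias
unalias ll           # remove it again

# To make an alias permanent, append it to a startup file, e.g.:
#   echo "alias ll='ls -al'" >> ~/.bashrc
```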

How to use the ls command in Linux

List Files and Directories. The ls command lists files and directories within the file system, and shows detailed information about them. It is a part of the GNU core utilities package which is installed on all Linux distributions. To list all files, including important dotfiles (hidden files), use: all files ls -a To show a long format list of files ls -al -rw-------. 1 root root 1206 Jun 28 2019 anaconda-ks.cfg -rw------- 1 root root 11467 May 5 16:13 .bash_history -rw-r--r--. 1 root root 18 Jun 13 2019 .bash_logout print sizes like 1K 234M 2G etc. ls -lh sort by time, newest first ls -lht sort them by size ls -lhaS drwxr-xr-x 840 root root 26K Nov 25 23:43 node_modules -rw-r--r-- 1 root root 1.1M Jun 8 23:46 package-lock.json The syntax for the ls command is as follows: Usage: ls [OPTIONS] [FILES] OPTIONS: ##### -a, --all do not ignore entries starting with . ##### -A, --almost-all do not list implied . and .. ##### --author with -l, print the author of each file ##### -b, --escape print C-style escapes for nongraphic characters ##### --block-size=SIZE with -l, scale sizes by SIZE when printing them; e.g., '--block-size=M'; see SIZE format below ##### -B, --ignore-backups do not list implied entries ending with ~ ##### -c with -lt: sort by, and show, ctime (time of last modification of file status information); with -l: show ctime and sort by name; otherwise: sort by ctime, newest first ##### -C list entries by columns ##### --color[=WHEN] colorize the output; WHEN can be 'always' (default if omitted), 'auto', or 'never'; more info below ##### -d, --directory list directories themselves, not their contents ##### -D, --dired generate output designed for Emacs' dired mode ##### -f do not sort, enable -aU, disable -ls --color ##### -F, --classify append indicator (one of */=>@|) to entries ##### --file-type likewise, except do not append '*' ##### --format=WORD across -x, commas -m, horizontal -x, long -l, single-column -1, verbose -l, vertical -C ##### 
--full-time like -l --time-style=full-iso ##### -g like -l, but do not list owner ##### --group-directories-first group directories before files; can be augmented with a --sort option, but any use of --sort=none (-U) disables grouping ##### -G, --no-group in a long listing, don't print group names ##### -h, --human-readable with -l and -s, print sizes like 1K 234M 2G etc. ##### --si likewise, but use powers of 1000 not 1024 ##### -H, --dereference-command-line follow symbolic links listed on the command line ##### --dereference-command-line-symlink-to-dir follow each command line symbolic link that points to a directory ##### --hide=PATTERN do not list implied entries matching shell PATTERN (overridden by -a or -A) ##### --hyperlink[=WHEN] hyperlink file names; WHEN can be 'always' (default if omitted), 'auto', or 'never' ##### --indicator-style=WORD append indicator with style WORD to entry names: none (default), slash (-p), file-type (--file-type), classify (-F) ##### -i, --inode print the index number of each file ##### -I, --ignore=PATTERN do not list implied entries matching shell PATTERN ##### -k, --kibibytes default to 1024-byte blocks for disk usage; used only with -s and per directory totals ##### -l use a long listing format ##### -L, --dereference when showing file information for a symbolic link, show information for the file the link references rather than for the link itself ##### -m fill width with a comma separated list of entries ##### -n, --numeric-uid-gid like -l, but list numeric user and group IDs ##### -N, --literal print entry names without quoting ##### -o like -l, but do not list group information ##### -p, --indicator-style=slash append / indicator to directories ##### -q, --hide-control-chars print ? 
instead of nongraphic characters ##### --show-control-chars show nongraphic characters as-is (the default, unless program is 'ls' and output is a terminal) ##### -Q, --quote-name enclose entry names in double quotes ##### --quoting-style=WORD use quoting style WORD for entry names: literal, locale, shell, shell-always, shell-escape, shell-escape-always, c, escape (overrides QUOTING_STYLE environment variable) ##### -r, --reverse reverse order while sorting ##### -R, --recursive list subdirectories recursively ##### -s, --size print the allocated size of each file, in blocks ##### -S sort by file size, largest first ##### --sort=WORD sort by WORD instead of name: none (-U), size (-S), time (-t), version (-v), extension (-X) ##### --time=WORD change the default of using modification times; access time (-u): atime, access, use; change time (-c): ctime, status; birth time: birth, creation; with -l, WORD determines which time to show; with --sort=time, sort by WORD (newest first) ##### --time-style=TIME_STYLE time/date format with -l; see TIME_STYLE below ##### -t sort by time, newest first; see --time ##### -T, --tabsize=COLS assume tab stops at each COLS instead of 8 ##### -u with -lt: sort by, and show, access time; with -l: show access time and sort by name; otherwise: sort by access time, newest first ##### -U do not sort; list entries in directory order ##### -v natural sort of (version) numbers within text ##### -w, --width=COLS set output width to COLS. 0 means no limit ##### -x list entries by lines instead of by columns ##### -X sort alphabetically by entry extension ##### -Z, --context print any security context of each file ##### -1 list one file per line. Avoid '\n' with -q or -b ##### --help display this help and exit ##### --version output version information and exit
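The flags above combine freely; a sketch in a directory we control, so the dotfile behavior is visible:

```shell
tmp=$(mktemp -d)
touch "$tmp/.hidden" "$tmp/visible"

ls "$tmp"           # plain listing skips dotfiles
ls -a "$tmp"        # includes . .. and .hidden
ls -alht "$tmp"     # long format, human-readable sizes, newest first
ls -1 "$tmp"        # one entry per line, handy for scripting
rm -r "$tmp"
```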

How to use the go mod command line

Go mod provides access to operations on modules. Initialize new module in current directory. If you plan to publish your module for others to use, the module path must be a location from which Go tools can download your module. go mod init example.com/hello go: creating new go.mod: module example.com/hello Usage: go mod <command> [arguments] Commands: ##### download download modules to local cache ##### edit edit go.mod from tools or scripts ##### graph print module requirement graph ##### init initialize new module in current directory ##### tidy add missing and remove unused modules ##### vendor make vendored copy of dependencies ##### verify verify dependencies have expected content ##### why explain why packages or modules are needed Download modules to local cache The go command will automatically download modules as needed during ordinary execution. The "go mod download" command is useful mainly for pre-filling the local cache or to compute the answers for a Go module proxy. Usage: go mod download [-x] [-json] [modules] Options: ##### -x The -x flag causes download to print the commands download executes. Edit go.mod from tools or scripts Edit provides a command-line interface for editing go.mod, for use primarily by tools or scripts. It reads only go.mod; it does not look up information about the modules involved. By default, edit reads and writes the go.mod file of the main module, but a different target file can be specified after the editing flags. Usage: go mod edit [editing flags] [go.mod] flags: ##### -fmt The -fmt flag reformats the go.mod file without making other changes. This reformatting is also implied by any other modifications that use or rewrite the go.mod file. The only time this flag is needed is if no other flags are specified, as in 'go mod edit -fmt'. ##### -module The -module flag changes the module's path (the go.mod file's module line). 
##### -require The -require=path@version and -droprequire=path flags add and drop a requirement on the given module path and version. Note that -require overrides any existing requirements on path. These flags are mainly for tools that understand the module graph. Users should prefer 'go get path@version' or 'go get path@none', which make other go.mod adjustments as needed to satisfy constraints imposed by other modules. ##### -exclude=path@version and -dropexclude=path@version The -exclude=path@version and -dropexclude=path@version flags add and drop an exclusion for the given module path and version. Note that -exclude=path@version is a no-op if that exclusion already exists. ##### -replace=old[@v]=new[@v] The -replace=old[@v]=new[@v] flag adds a replacement of the given module path and version pair. If the @v in old@v is omitted, a replacement without a version on the left side is added, which applies to all versions of the old module path. If the @v in new@v is omitted, the new path should be a local module root directory, not a module path. Note that -replace overrides any redundant replacements for old[@v], so omitting @v will drop existing replacements for specific versions. ##### -dropreplace=old[@v] The -dropreplace=old[@v] flag drops a replacement of the given module path and version pair. If the @v is omitted, a replacement without a version on the left side is dropped. ##### -retract=version and -dropretract=version The -retract=version and -dropretract=version flags add and drop a retraction on the given version. The version may be a single version like "v1.2.3" or a closed interval like "[v1.1.0,v1.1.9]". Note that -retract=version is a no-op if that retraction already exists. 
##### -require, -droprequire, -exclude, -dropexclude, -replace, -dropreplace, -retract, and -dropretract editing The -require, -droprequire, -exclude, -dropexclude, -replace, -dropreplace, -retract, and -dropretract editing flags may be repeated, and the changes are applied in the order given. ##### -go=version The -go=version flag sets the expected Go language version. ##### -print The -print flag prints the final go.mod in its text format instead of writing it back to go.mod. ##### -json The -json flag prints the final go.mod file in JSON format instead of writing it back to go.mod. The JSON output corresponds to these Go types: Print module requirement graph Graph prints the module requirement graph (with replacements applied) in text form. Each line in the output has two space-separated fields: a module and one of its requirements. Each module is identified as a string of the form path@version, except for the main module, which has no @version suffix. Usage: go mod graph ##### go mod graph golang.org/x/[email protected] golang.org/x/[email protected] golang.org/x/[email protected] golang.org/x/[email protected] golang.org/x/[email protected] golang.org/x/[email protected] Initialize new module in current directory Init initializes and writes a new go.mod file in the current directory, in effect creating a new module rooted at the current directory. The go.mod file must not already exist. Usage: go mod init [module] ##### Add missing and remove unused modules Tidy makes sure go.mod matches the source code in the module. It adds any missing modules necessary to build the current module's packages and dependencies, and it removes unused modules that don't provide any relevant packages. It also adds any missing entries to go.sum and removes any unnecessary ones. Usage: go mod tidy [-e] [-v] flags: ##### -v The -v flag causes tidy to print information about removed modules to standard error. 
##### -e The -e flag causes tidy to attempt to proceed despite errors encountered while loading packages. Make vendored copy of dependencies Vendor resets the main module's vendor directory to include all packages needed to build and test all the main module's packages. It does not include test code for vendored packages. Usage: go mod vendor [-e] [-v] flags: ##### -v The -v flag causes vendor to print the names of vendored modules and packages to standard error. ##### -e The -e flag causes vendor to attempt to proceed despite errors encountered while loading packages. Verify dependencies have expected content Verify checks that the dependencies of the current module, which are stored in a local downloaded source cache, have not been modified since being downloaded. If all the modules are unmodified, verify prints "all modules verified." Otherwise it reports which modules have been changed and causes 'go mod' to exit with a non-zero status. Usage: go mod verify ##### Explain why packages or modules are needed Why shows a shortest path in the import graph from the main module to each of the listed packages. Usage: go mod why [-m] [-vendor] packages... flags: ##### -m If the -m flag is given, why treats the arguments as a list of modules and finds a path to any package in each of the modules. ##### -vendor The -vendor flag causes why to exclude tests of dependencies.
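The init and edit subcommands above compose into a short offline workflow. A sketch, assuming the Go toolchain is installed; the module path example.com/hello is a placeholder, and no network access is needed:

```shell
tmp=$(mktemp -d)
cd "$tmp"

go mod init example.com/hello   # writes a fresh go.mod
go mod edit -go=1.21            # set the expected Go language version
go mod edit -print              # print the result without rewriting go.mod

cd /tmp && rm -r "$tmp"
```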

How to use the go doc command line

Show documentation for package or symbol. Doc prints the documentation comments associated with the item identified by its arguments (a package, const, func, type, var, method, or struct field), followed by a one-line summary of each of the first-level items "under" that item (package-level declarations for a package, methods for a type, etc.).

Show documentation for the current package.
go doc

Show package docs for the doc command.
go doc cmd/doc

Show documentation for the encoding/json package.
go doc encoding/json
package json // import "encoding/json"
Package json implements encoding and decoding of JSON as defined in RFC 7159. The mapping between JSON and Go values is described in the documentation for the Marshal and Unmarshal functions.

The go doc Commands and Options

Usage: go <command> [arguments]

Flags:
##### -all
Show all the documentation for the package.
##### -c
Respect case when matching symbols.
##### -cmd
Treat a command (package main) like a regular package. Otherwise package main's exported symbols are hidden when showing the package's top-level documentation.
##### -short
One-line representation for each symbol.
##### -src
Show the full source code for the symbol. This will display the full Go source of its declaration and definition, such as a function definition (including the body), type declaration or enclosing const block. The output may therefore include unexported details.
##### -u
Show documentation for unexported as well as exported symbols, methods, and fields.

How to use the go clean command line

Remove object files and cached files. Clean removes object files from package source directories. The go command builds most objects in a temporary directory, so go clean is mainly concerned with object files left by other tools or by manual invocations of go build.

Clean
go clean

Clean the module download cache
go clean --modcache

The go clean Commands and Options

Usage: go clean [clean flags] [build flags] [packages]

Clean flags:
##### -i
The -i flag causes clean to remove the corresponding installed archive or binary (what 'go install' would create).
##### -n
The -n flag causes clean to print the remove commands it would execute, but not run them.
##### -r
The -r flag causes clean to be applied recursively to all the dependencies of the packages named by the import paths.
##### -x
The -x flag causes clean to print remove commands as it executes them.
##### -cache
The -cache flag causes clean to remove the entire go build cache.
##### -testcache
The -testcache flag causes clean to expire all test results in the go build cache.
##### -modcache
The -modcache flag causes clean to remove the entire module download cache, including unpacked source code of versioned dependencies.

If a package argument is given or the -i or -r flag is set, clean removes the following files from each of the source directories corresponding to the import paths:
##### _obj/ old object directory, left from Makefiles
##### _test/ old test directory, left from Makefiles
##### _testmain.go old gotest file, left from Makefiles
##### test.out old test log, left from Makefiles
##### build.out old test log, left from Makefiles
##### *.[568ao] object files, left from Makefiles
##### DIR(.exe) from go build
##### DIR.test(.exe) from go test -c
##### MAINFILE(.exe) from go build MAINFILE.go
##### *.so from SWIG

How to use the go build command line

Compile packages and dependencies. Build compiles the packages named by the import paths, along with their dependencies, but it does not install the results.

Build
go build
go build hello.go
ls -lh
1.9M Jun 29 17:38 hello
74B Jun 29 17:37 hello.go

The build flags are shared by the build, clean, get, install, list, run, and test commands:

The go build Commands and Options

Usage: go build [-o output] [build flags] [packages]

Build flags:
##### -a
force rebuilding of packages that are already up-to-date.
##### -n
print the commands but do not run them.
##### -p n
the number of programs, such as build commands or test binaries, that can be run in parallel. The default is the number of CPUs available.
##### -race
enable data race detection. Supported only on linux/amd64, freebsd/amd64, darwin/amd64, windows/amd64, linux/ppc64le and linux/arm64 (only for 48-bit VMA).
##### -msan
enable interoperation with memory sanitizer. Supported only on linux/amd64, linux/arm64 and only with Clang/LLVM as the host C compiler. On linux/arm64, pie build mode will be used.
##### -v
print the names of packages as they are compiled.
##### -work
print the name of the temporary work directory and do not delete it when exiting.
##### -x
print the commands.
##### -asmflags '[pattern=]arg list'
arguments to pass on each go tool asm invocation.
##### -buildmode mode
build mode to use. See 'go help buildmode' for more.
##### -compiler name
name of compiler to use, as in runtime.Compiler (gccgo or gc).
##### -gccgoflags '[pattern=]arg list'
arguments to pass on each gccgo compiler/linker invocation.
##### -gcflags '[pattern=]arg list'
arguments to pass on each go tool compile invocation.
##### -installsuffix suffix
a suffix to use in the name of the package installation directory, in order to keep output separate from default builds. If using the -race flag, the install suffix is automatically set to race or, if set explicitly, has _race appended to it. Likewise for the -msan flag.
Using a -buildmode option that requires non-default compile flags has a similar effect.
##### -ldflags '[pattern=]arg list'
arguments to pass on each go tool link invocation.
##### -linkshared
build code that will be linked against shared libraries previously created with -buildmode=shared.
##### -mod mode
module download mode to use: readonly, vendor, or mod. By default, if a vendor directory is present and the go version in go.mod is 1.14 or higher, the go command acts as if -mod=vendor were set. Otherwise, the go command acts as if -mod=readonly were set.
##### -modcacherw
leave newly-created directories in the module cache read-write instead of making them read-only.
##### -modfile file
in module aware mode, read (and possibly write) an alternate go.mod file instead of the one in the module root directory. A file named "go.mod" must still be present in order to determine the module root directory, but it is not accessed. When -modfile is specified, an alternate go.sum file is also used: its path is derived from the -modfile flag by trimming the ".mod" extension and appending ".sum".
##### -overlay file
read a JSON config file that provides an overlay for build operations. The file is a JSON struct with a single field, named 'Replace', that maps each disk file path (a string) to its backing file path, so that a build will run as if the disk file path exists with the contents given by the backing file paths, or as if the disk file path does not exist if its backing file path is empty. Support for the -overlay flag has some limitations: importantly, cgo files included from outside the include path must be in the same directory as the Go package they are included from, and overlays will not appear when binaries and tests are run through go run and go test respectively.
##### -pkgdir dir
install and load all packages from dir instead of the usual locations.
For example, when building with a non-standard configuration, use -pkgdir to keep generated packages in a separate location.
##### -tags tag,list
a comma-separated list of build tags to consider satisfied during the build. For more information about build tags, see the description of build constraints in the documentation for the go/build package. (Earlier versions of Go used a space-separated list, and that form is deprecated but still recognized.)
##### -trimpath
remove all file system paths from the resulting executable. Instead of absolute file system paths, the recorded file names will begin with either "go" (for the standard library), or a module path@version (when using modules), or a plain import path (when using GOPATH).
##### -toolexec 'cmd args'
a program to use to invoke toolchain programs like vet and asm. For example, instead of running asm, the go command will run 'cmd args /path/to/asm <arguments for asm>'.

Compile and run Go program

Run compiles and runs the named main Go package. Typically the package is specified as a list of .go source files from a single directory, but it may also be an import path, file system path, or pattern matching a single known package, as in 'go run .' or 'go run my/cmd'.

Usage: go run [build flags] [-exec xprog] package [arguments...]

flags:
##### -exec
If the -exec flag is given, 'go run' invokes the binary using xprog.

Run
go run .
go run hello.go
Hello, World!

Test packages

'Go test' automates testing the packages named by the import paths. It prints a summary of the test results.

Usage: go test [build/test flags] [packages] [build/test flags & test binary flags]

In addition to the build flags:
##### -args
Pass the remainder of the command line (everything after -args) to the test binary, uninterpreted and unchanged. Because this flag consumes the remainder of the command line, the package list (if present) must appear before this flag.
##### -c
Compile the test binary to pkg.test but do not run it (where pkg is the last element of the package's import path). The file name can be changed with the -o flag.
##### -exec xprog
Run the test binary using xprog. The behavior is the same as in 'go run'. See 'go help run' for details.
##### -i
Install packages that are dependencies of the test. Do not run the test. The -i flag is deprecated. Compiled packages are cached automatically.
##### -json
Convert test output to JSON suitable for automated processing. See 'go doc test2json' for the encoding details.
##### -o file
Compile the test binary to the named file. The test still runs (unless -c or -i is specified).
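The -overlay build flag described above takes a JSON config file with a single Replace field. A minimal sketch — the disk paths and the backing path are illustrative assumptions:

```json
{
  "Replace": {
    "/home/user/hello/config.go": "/tmp/overlay/config.go",
    "/home/user/hello/legacy.go": ""
  }
}
```

The first entry makes the build behave as if config.go had the contents of the backing file in /tmp; the empty backing path makes legacy.go appear not to exist. It would be passed as, for example, 'go build -overlay overlay.json'.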

How to use the go command line

Go is a tool for managing Go source code. Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.

Go install.
1. Extract the archive you downloaded into /usr/local, creating a Go tree in /usr/local/go.
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
2. Add /usr/local/go/bin to the PATH environment variable.
export PATH=$PATH:/usr/local/go/bin
vi ~/.profile
source ~/.profile
3. Verify that you've installed Go by opening a command prompt and typing the following command:
go version
go version go1.16.5 darwin/amd64

Get started with Hello, World.
go mod init example.com/hello
go run .
go: creating new go.mod: module example.com/hello
Hello, World!

Use "go help <command>" for more information about a command.

Help
go help build
go help run
usage: go run [build flags] [-exec xprog] package [arguments...]
Run compiles and runs the named main Go package.
......

The go Help Commands and Options

Usage: go <command> [arguments]

Commands:
##### bug start a bug report
##### build compile packages and dependencies
##### clean remove object files and cached files
##### doc show documentation for package or symbol
##### env print Go environment information
##### fix update packages to use new APIs
##### fmt gofmt (reformat) package sources
##### generate generate Go files by processing source
##### get add dependencies to current module and install them
##### install compile and install packages and dependencies
##### list list packages or modules
##### mod module maintenance
##### run compile and run Go program
##### test test packages
##### tool run specified go tool
##### version print Go version
##### vet report likely mistakes in packages

########Additional help topics:
##### buildconstraint build constraints
##### buildmode build modes
##### c calling between Go and C
##### cache build and test caching
##### environment environment variables
##### filetype file types
##### go.mod the go.mod file
##### gopath GOPATH environment variable
##### gopath-get legacy GOPATH go get
##### goproxy module proxy protocol
##### importpath import path syntax
##### modules modules, module versions, and more
##### module-get module-aware go get
##### module-auth module authentication using go.sum
##### packages package lists and patterns
##### private configuration for downloading non-public code
##### testflag testing flags
##### testfunc testing functions
##### vcs controlling version control with GOVCS

Print Go version

Usage: go version [-m] [-v] [file ...]

flags:
##### -v
The -v flag causes it to report unrecognized files.
##### -m
The -m flag causes go version to print each executable's embedded module version information, when available. In the output, the module information consists of multiple lines following the version line, each indented by a leading tab character.

Report likely mistakes in packages

Usage: go vet [-n] [-x] [-vettool prog] [build flags] [vet flags] [packages]

flags:
##### -n
The -n flag prints commands that would be executed.
##### -x
The -x flag prints commands as they are executed.
##### -vettool=prog
The -vettool=prog flag selects a different analysis tool with alternative or additional checks. For example, the 'shadow' analyzer can be built and run as a separate vet tool.

Print Go environment information

Usage: go env [-json] [-u] [-w] [var ...]

flags:
##### -json
The -json flag prints the environment in JSON format instead of as a shell script.
##### -u
The -u flag requires one or more arguments and unsets the default setting for the named environment variables, if one has been set with 'go env -w'.
##### -w
The -w flag requires one or more arguments of the form NAME=VALUE and changes the default settings of the named environment variables to the given values.

Update packages to use new APIs

Fix runs the Go fix command on the packages named by the import paths.
Usage: go fix [packages]

##### Gofmt (reformat) package sources

Usage: go fmt [-n] [-x] [packages]

flags:
##### -n
The -n flag prints commands that would be executed.
##### -x
The -x flag prints commands as they are executed.
##### -mod
The -mod flag's value sets which module download mode to use: readonly or vendor. See 'go help modules' for more.

Generate Go files by processing source

Generate runs commands described by directives within existing files. Those commands can run any process, but the intent is to create or update Go source files.

Usage: go generate [-run regexp] [-n] [-v] [-x] [build flags] [file.go... | packages]

flags:
##### -run=""
if non-empty, specifies a regular expression to select directives whose full original source text (excluding any trailing spaces and final newline) matches the expression.
##### -v
The -v flag prints the names of packages and files as they are processed.
##### -n
The -n flag prints commands that would be executed.
##### -x
The -x flag prints commands as they are executed.
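A go generate directive lives in an ordinary Go source file as a specially formatted comment. A minimal sketch — the stringer tool (from golang.org/x/tools) and the Pill type follow the classic example from the Go documentation and are assumptions here, not part of this article:

```go
package painkiller

// 'go generate' scans this file, finds the //go:generate marker, and runs
// the command after it; here that would invoke the stringer tool to write
// pill_string.go containing a String() method for Pill.
//go:generate stringer -type=Pill

type Pill int

const (
	Placebo Pill = iota
	Aspirin
	Ibuprofen
)
```

The directive does nothing at build time; it only runs when 'go generate' is invoked explicitly.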

How to use the Angular CLI update command line

Updates your application and its dependencies. Perform a basic update to the current stable release of the core framework and CLI by running the following command.

ng update
ng update @angular/cli @angular/core
The installed local Angular CLI version is older than the latest stable version.
Installing a temporary version to perform the update.
Installing packages for tooling via npm.
Installed packages for tooling via npm.
Using package manager: 'npm'
Collecting installed dependencies...
Found 28 dependencies.
Fetching dependency metadata from registry...

For example, use the following command to take the latest 10.x.x version and use that to update.

ng update
ng update @angular/cli@^10 @angular/core@^10

The Angular CLI update Help Commands and Options

Usage: ng update [options]

Options:
##### --all default:false
Deprecated. Whether to update all packages in package.json.
##### --allow-dirty
Whether to allow updating when the repository contains modified or untracked files.
##### --create-commits default:false
Create source control commits for updates and migrations.
Aliases: -C
##### --force default:false
Ignore peer dependency version mismatches. Passes the --force flag to the package manager when installing packages.
##### --from
Version from which to migrate. Only available with a single package being updated, and only for migrations.
##### --help default:false
Shows a help message for this command in the console.
##### --migrate-only
Only perform a migration, do not update the installed version.
##### --next default:false
Use the prerelease version, including beta and RCs.
##### --packages
The names of package(s) to update.
##### --to
Version up to which to apply migrations. Only available with a single package being updated, and only for migrations. Requires --from to be specified. Defaults to the installed version detected.
##### --verbose default:false
Display additional details about internal operations during execution.

How to use the Angular CLI test command line

Runs unit tests in a project. Takes the name of the project, as specified in the projects section of the angular.json workspace configuration file. When a project name is not supplied, it executes for all projects.

Set up testing
ng test
ng t
Compiling @angular/core/testing : es2015 as esm2015
Compiling @angular/platform-browser/testing : es2015 as esm2015
Compiling @angular/compiler/testing : es2015 as esm2015
Compiling @angular/common/testing : es2015 as esm2015
Compiling @angular/router/testing : es2015 as esm2015
Compiling @angular/platform-browser-dynamic/testing : es2015 as esm2015
⠙ Generating browser application bundles (phase: building)...
29 06 2021 10:02:41.970:WARN [karma]: No captured browser, open http://localhost:9876/
29 06 2021 10:02:41.980:INFO [karma-server]: Karma v6.1.2 server started at http://localhost:9876/
29 06 2021 10:02:41.981:INFO [launcher]: Launching browsers Chrome with concurrency unlimited
29 06 2021 10:02:41.985:INFO [launcher]: Starting browser Chrome
✔ Browser application bundle generation complete.
29 06 2021 10:02:50.069:WARN [karma]: No captured browser, open http://localhost:9876/
✔ Browser application bundle generation complete.
29 06 2021 10:02:50.766:INFO [Chrome 91.0.4472.114 (Mac OS 10.15.7)]: Connected on socket EgxnYMBGYuRr9ebhAAAB with id 56954275
Chrome 91.0.4472.114 (Mac OS 10.15.7): Executed 3 of 3 SUCCESS (0.261 secs / 0.184 secs)
TOTAL: 3 SUCCESS

The Angular CLI test Help Commands and Options

Usage: ng test <project> [options]
ng t <project> [options]

Arguments:
##### <project> The name of the project to build. Can be an application or a library.

########Options:
##### --browsers
Override which browsers tests are run against.
##### --code-coverage default:false
Output a code coverage report.
##### --code-coverage-exclude
Globs to exclude from code coverage.
##### --configuration
One or more named builder configurations as a comma-separated list as specified in the "configurations" section of angular.json.
The builder uses the named configurations to run the given target. Setting this explicitly overrides the "--prod" flag.
Aliases: -c
##### --help default:false
Shows a help message for this command in the console.
##### --include
Globs of files to include, relative to workspace or project root. There are two special cases: when a path to a directory is provided, all spec files ending in ".spec.@(ts|tsx)" will be included; when a path to a file is provided, and a matching spec file exists, it will be included instead.
##### --inline-style-language default:css
The stylesheet language to use for the application's inline component styles.
##### --karma-config
The name of the Karma configuration file.
##### --main
The name of the main entry-point file.
##### --poll
Enable and define the file watching poll time period in milliseconds.
##### --polyfills
The name of the polyfills file.
##### --preserve-symlinks
Do not use the real path when resolving modules. If unset, defaults to true if the NodeJS option --preserve-symlinks is set.
##### --prod
Deprecated: Use --configuration production instead. Shorthand for "--configuration=production". Set the build configuration to the production target. By default, the production target is set up in the workspace configuration such that all builds make use of bundling, limited tree-shaking, and also limited dead code elimination.
##### --progress default:true
Log progress to the console while building.
##### --reporters
Karma reporters to use. Directly passed to the karma runner.
##### --source-map default:true
Output source maps for scripts and styles.
##### --ts-config
The name of the TypeScript configuration file.
##### --watch
Run build when files change.
##### --web-worker-ts-config
TypeScript configuration for Web Worker modules.
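The --karma-config flag points at a standard Karma configuration file. A heavily trimmed sketch of one — the framework and plugin names reflect a typical Angular CLI default setup and are assumptions, not taken from this article:

```javascript
// karma.conf.js -- a minimal sketch; real projects list more plugins and settings.
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine', '@angular-devkit/build-angular'],
    browsers: ['Chrome'],    // overridable with --browsers
    reporters: ['progress'], // overridable with --reporters
    port: 9876,              // matches the http://localhost:9876/ URL in the log above
    singleRun: false,        // keep watching for file changes
  });
};
```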

How to use the Angular CLI serve command line

Builds and serves your app, rebuilding on file changes. ng serve is a great command to use when developing your application locally. It starts up a local development server, which serves your application while you are developing it. It is not meant to be used for a production environment.

Builds and serves your app
ng serve
ng s
Alias for the command ng run [project]:serve
Build at: 2021-06-29T01:46:17.730Z - Hash: 46ebb73a5e8b7c265ada - Time: 18490ms
** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ **
✔ Compiled successfully.
✔ Browser application bundle generation complete.

Port to listen on.
Port
ng serve --port=4008
ng s --port=4008
Build at: 2021-06-29T01:49:43.345Z - Hash: 46ebb73a5e8b7c265ada - Time: 7333ms
** Angular Live Development Server is listening on localhost:4008, open your browser on http://localhost:4008/ **

The Angular CLI serve Help Commands and Options

Usage: ng serve <project> [options]
ng s <project> [options]

Arguments:
##### <project> The name of the project to build. Can be an application or a library.

########Options:
##### --allowed-hosts
List of hosts that are allowed to access the dev server.
##### --aot
Deprecated: Use the "aot" option in the browser builder instead. Build using Ahead of Time compilation.
##### --base-href
Deprecated: Use the "baseHref" option in the browser builder instead. Base url for the application being built.
##### --browser-target
A browser builder target to serve in the format of project:target[:configuration]. You can also pass in more than one configuration name as a comma-separated list. Example: project:target:production,staging.
##### --common-chunk
Deprecated: Use the "commonChunk" option in the browser builder instead. Generate a separate bundle containing code used across multiple bundles.
##### --configuration
One or more named builder configurations as a comma-separated list as specified in the "configurations" section of angular.json. The builder uses the named configurations to run the given target. Setting this explicitly overrides the "--prod" flag.
Aliases: -c
##### --deploy-url
Deprecated: Use the "deployUrl" option in the browser builder instead. URL where files will be deployed.
##### --disable-host-check default:false
Don't verify that connected clients are part of allowed hosts.
##### --help default:false
Shows a help message for this command in the console.
##### --hmr default:false
Enable hot module replacement.
##### --hmr-warning default:true
Deprecated: No longer has an effect. Show a warning when the --hmr option is enabled.
##### --host default:localhost
Host to listen on.
##### --live-reload default:true
Whether to reload the page on change, using live-reload.
##### --open default:false
Opens the url in the default browser.
Aliases: -o
##### --optimization
Deprecated: Use the "optimization" option in the browser builder instead. Enables optimization of the build output, including minification of scripts and styles, tree-shaking, dead-code elimination, and font inlining.
##### --poll
Enable and define the file watching poll time period in milliseconds.
##### --port default:4200
Port to listen on.
##### --prod
Deprecated: Use --configuration production instead. Shorthand for "--configuration=production". Set the build configuration to the production target. By default, the production target is set up in the workspace configuration such that all builds make use of bundling, limited tree-shaking, and also limited dead code elimination.
##### --progress
Deprecated: Use the "progress" option in the browser builder instead. Log progress to the console while building.
##### --proxy-config
Proxy configuration file.
##### --public-host
The URL that the browser client (or live-reload client, if enabled) should use to connect to the development server. Use for a complex dev server setup, such as one with reverse proxies.
##### --serve-path
The pathname where the app will be served.
##### --serve-path-default-warning default:true
Deprecated: No longer has an effect. Show a warning when deploy-url/base-href use unsupported serve path values.
##### --source-map
Deprecated: Use the "sourceMap" option in the browser builder instead. Output source maps for scripts and styles.
##### --ssl default:false
Serve using HTTPS.
##### --ssl-cert
SSL certificate to use for serving HTTPS.
##### --ssl-key
SSL key to use for serving HTTPS.
##### --vendor-chunk
Deprecated: Use the "vendorChunk" option in the browser builder instead. Generate a separate bundle containing only vendor libraries. This option should only be used for development.
##### --verbose
Adds more details to output logging.
##### --watch default:true
Rebuild on change.
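The --proxy-config flag takes a JSON file describing backend proxies for the dev server. A minimal sketch — the /api path and the target port are illustrative assumptions:

```json
{
  "/api": {
    "target": "http://localhost:3000",
    "secure": false,
    "changeOrigin": true
  }
}
```

With this saved as proxy.conf.json, 'ng serve --proxy-config proxy.conf.json' forwards requests for /api to the backend on port 3000 instead of serving them from the Angular app.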

How to use the Angular CLI new command line

Create an Angular workspace. Creates and initializes a new Angular application that is the default project for a new workspace. Provides interactive prompts for optional configuration, such as adding routing support. All prompts can safely be allowed to default.

Create
ng new workspace
ng n workspace
ng serve
? Do you want to enforce stricter type checking and stricter bundle budgets in the workspace? This setting helps improve maintainability and catch bugs ahead of time. For more information, see https://angular.io/strict Yes
? Would you like to add Angular routing? Yes
? Which stylesheet format would you like to use? SCSS [ https://sass-lang.com/documentation/syntax#scss ]
CREATE workspace/README.md (1019 bytes)

The Angular CLI new Help Commands and Options

Usage: ng new <name> [options]
ng n <name> [options]

Arguments:
##### <name> The name of the new workspace and initial project.

########Options:
##### --collection
A collection of schematics to use in generating the initial application.
Aliases: -c
##### --commit default:true
Initial git repository commit information.
##### --create-application default:true
Create a new initial application project in the 'src' folder of the new workspace. When false, creates an empty workspace with no initial application. You can then use the generate application command so that all applications are created in the projects folder.
##### --defaults
Disable interactive input prompts for options with a default.
##### --directory
The directory name to create the workspace in.
##### --dry-run default:false
Run through and report activity without writing out results.
Aliases: -d
##### --force default:false
Force overwriting of existing files.
Aliases: -f
##### --help default:false
Shows a help message for this command in the console.
##### --inline-style
Include styles inline in the component TS file. By default, an external styles file is created and referenced in the component TypeScript file.
Aliases: -s
##### --inline-template
Include template inline in the component TS file. By default, an external template file is created and referenced in the component TypeScript file.
Aliases: -t
##### --interactive
Enable interactive input prompts.
##### --legacy-browsers default:false
Deprecated: Legacy browsers support is deprecated since version 12.
##### --minimal default:false
Create a workspace without any testing frameworks. (Use for learning purposes only.)
##### --new-project-root default:projects
The path where new projects will be created, relative to the new workspace root.
##### --package-manager
The package manager used to install dependencies.
##### --prefix default:app
The prefix to apply to generated selectors for the initial project.
Aliases: -p
##### --routing
Generate a routing module for the initial project.
##### --skip-git default:false
Do not initialize a git repository.
Aliases: -g
##### --skip-install default:false
Do not install dependency packages.
##### --skip-tests default:false
Do not generate "spec.ts" test files for the new project.
Aliases: -S
##### --strict default:true
Creates a workspace with stricter type checking and stricter bundle budget settings. This setting helps improve maintainability and catch bugs ahead of time.
##### --style
The file extension or preprocessor to use for style files.
##### --verbose default:false
Add more details to output logging.
Aliases: -v
##### --view-encapsulation
The view encapsulation strategy to use in the initial project.

How to use the Angular CLI generate command line

Generates and/or modifies files based on a schematic. Creates a new, generic NgModule definition in the given or default project.

NgModule
ng generate module page/edit
ng g m page/edit
CREATE src/app/page/edit/edit.module.ts (190 bytes)

Creates a new, generic component definition in the given or default project.

component
ng generate component page/edit
ng g c page/edit
CREATE src/app/page/edit/edit.component.scss (0 bytes)
CREATE src/app/page/edit/edit.component.html (19 bytes)
CREATE src/app/page/edit/edit.component.spec.ts (612 bytes)
CREATE src/app/page/edit/edit.component.ts (268 bytes)
UPDATE src/app/page/edit/edit.module.ts (261 bytes)

The Angular CLI generate Help Commands and Options

Usage: ng generate <schematic> [options]
ng g <schematic> [options]

Arguments:
##### <schematic> The schematic or collection:schematic to generate. This option can take one of the following sub-commands: app-shell, application, class, component, directive, enum, guard, interceptor, interface, library, module, pipe, resolver, service, service-worker, web-worker

########Options:
##### --defaults
Disable interactive input prompts for options with a default.
##### --dry-run default:false
Run through and report activity without writing out results.
Aliases: -d
##### --force default:false
Force overwriting of existing files.
Aliases: -f
##### --help default:false
Shows a help message for this command in the console.
##### --interactive
Enable interactive input prompts.

Schematic commands: app-shell

Generates an app shell for running a server-side version of an app.

Usage: ng generate app-shell [options]
ng g app-shell [options]

Options:
##### --app-dir default:app
The name of the application directory.
##### --app-id default:serverApp
The app ID to use in withServerTransition().
##### --main default:main.server.ts
The name of the main entry-point file.
##### --project
The name of the related client app.
##### --root-module-class-name default:AppServerModule
The name of the root module class.
##### --root-module-file-name default:app.server.module.ts
The name of the root module file.
##### --route default:shell
Route path used to produce the app shell.

Schematic commands: application

Generates a new basic app definition in the "projects" subfolder of the workspace.

Usage: ng generate application <name> [options]
ng g application <name> [options]

Arguments:
##### <name> The name of the new app.

########Options:
##### --inline-style
Include styles inline in the root component.ts file. Only CSS styles can be included inline. Default is false, meaning that an external styles file is created and referenced in the root component.ts file.
Aliases: -s
##### --inline-template
Include template inline in the root component.ts file. Default is false, meaning that an external template file is created and referenced in the root component.ts file.
Aliases: -t
##### --legacy-browsers default:false
Deprecated: Legacy browsers support is deprecated since version 12.
##### --lint-fix
Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the application.
##### --minimal default:false
Create a bare-bones project without any testing frameworks. (Use for learning purposes only.)
##### --prefix default:app
A prefix to apply to generated selectors.
Aliases: -p
##### --routing default:false
Create a routing NgModule.
##### --skip-install default:false
Skip installing dependency packages.
##### --skip-package-json default:false
Do not add dependencies to the "package.json" file.
##### --skip-tests default:false
Do not create "spec.ts" test files for the application.
Aliases: -S
##### --strict default:true
Creates an application with stricter bundle budget settings.
##### --style default:css
The file extension or preprocessor to use for style files.
##### --view-encapsulation
The view encapsulation strategy to use in the new app.

Schematic commands: class

Creates a new, generic class definition in the given or default project.
Usage: ng generate class <name> [options] ng g class <name> [options] Arguments: ##### <name> The name of the new class. ########Options: ##### --lint-fix default:false Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the class. ##### --project The name of the project. ##### --skip-tests default:false Do not create "spec.ts" test files for the new class. ##### --type Adds a developer-defined type to the filename, in the format "name.type.ts". Schematic commands: component Creates a new, generic component definition in the given or default project. Usage: ng generate component <name> [options] ng g component <name> [options] ng g c <name> [options] Arguments: ##### <name> The name of the component. ########Options: ##### --change-detection default:Default The change detection strategy to use in the new component. Aliases: -c ##### --display-block default:false Specifies if the style will contain :host { display: block; }. Aliases: -b ##### --export default:false The declaring NgModule exports this component. ##### --flat default:false Create the new files at the top level of the current project. ##### --inline-style default:false Include styles inline in the component.ts file. Only CSS styles can be included inline. By default, an external styles file is created and referenced in the component.ts file. Aliases: -s ##### --inline-template default:false Include template inline in the component.ts file. By default, an external template file is created and referenced in the component.ts file. Aliases: -t ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the component. ##### --module The declaring NgModule. Aliases: -m ##### --prefix The prefix to apply to the generated component selector. Aliases: -p ##### --project The name of the project. ##### --selector The HTML selector to use for this component. ##### --skip-import default:false Do not import this component into the owning NgModule. 
##### --skip-selector default:false Specifies if the component should have a selector or not. ##### --skip-tests default:false Do not create "spec.ts" test files for the new component. ##### --style default:css The file extension or preprocessor to use for style files. ##### --type default:Component Adds a developer-defined type to the filename, in the format "name.type.ts". ##### --view-encapsulation The view encapsulation strategy to use in the new component. Aliases: -v Schematic commands: directive Creates a new, generic directive definition in the given or default project. Usage: ng generate directive <name> [options] ng g directive <name> [options] Arguments: ##### <name> The name of the new directive. ########Options: ##### --export default:false The declaring NgModule exports this directive. ##### --flat default:true When true (the default), creates the new files at the top level of the current project. ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the directive. ##### --module The declaring NgModule. Aliases: -m ##### --prefix A prefix to apply to generated selectors. Aliases: -p ##### --project The name of the project. ##### --selector The HTML selector to use for this directive. ##### --skip-import default:false Do not import this directive into the owning NgModule. ##### --skip-tests default:false Do not create "spec.ts" test files for the new directive. Schematic commands: enum Generates a new, generic enum definition for the given or default project. Usage: ng generate enum <name> [options] ng g enum <name> [options] Arguments: ##### <name> The name of the enum. ########Options: ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the enum. ##### --project The name of the project in which to create the enum. Default is the configured default project for the workspace. 
##### --type Adds a developer-defined type to the filename, in the format "name.type.ts". Schematic commands: guard Generates a new, generic route guard definition in the given or default project. Usage: ng generate guard <name> [options] ng g guard <name> [options] Arguments: ##### <name> The name of the new route guard. ########Options: ##### --flat default:true When true (the default), creates the new files at the top level of the current project. ##### --implements Specifies which interfaces to implement. ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the guard. ##### --project The name of the project. ##### --skip-tests default:false Do not create "spec.ts" test files for the new guard. Schematic commands: interceptor Creates a new, generic interceptor definition in the given or default project. Usage: ng generate interceptor <name> [options] ng g interceptor <name> [options] Arguments: ##### <name> The name of the interceptor. ########Options: ##### --flat default:true When true (the default), creates files at the top level of the project. ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the interceptor. ##### --project The name of the project. ##### --skip-tests default:false Do not create "spec.ts" test files for the new interceptor. Schematic commands: interface Creates a new, generic interface definition in the given or default project. Usage: ng generate interface <name> <type> [options] ng g interface <name> <type> [options] Arguments: ##### <name> The name of the interface. ##### <type> Adds a developer-defined type to the filename, in the format "name.type.ts". ########Options: ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the interface. ##### --prefix A prefix to apply to generated selectors. ##### --project The name of the project. 
Schematic commands: library Creates a new, generic library project in the current workspace. Usage: ng generate library <name> [options] ng g library <name> [options] Arguments: ##### <name> The name of the library. ########Options: ##### --entry-file default:public-api The path at which to create the library's public API file, relative to the workspace root. ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the library. ##### --prefix default:lib A prefix to apply to generated selectors. Aliases: -p ##### --skip-install default:false Do not install dependency packages. ##### --skip-package-json default:false Do not add dependencies to the "package.json" file. ##### --skip-ts-config default:false Do not update "tsconfig.json" to add a path mapping for the new library. The path mapping is needed to use the library in an app, but can be disabled here to simplify development. Schematic commands: module Creates a new, generic NgModule definition in the given or default project. Usage: ng generate module <name> [options] ng g module <name> [options] ng g m <name> [options] Arguments: ##### <name> The name of the NgModule. ########Options: ##### --flat default:false Create the new files at the top level of the current project root. ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the module. ##### --module The declaring NgModule. Aliases: -m ##### --project The name of the project. ##### --route The route path for a lazy-loaded module. When supplied, creates a component in the new module, and adds the route to that component in the Routes array declared in the module provided in the --module option. ##### --routing default:false Create a routing module. ##### --routing-scope default:Child The scope for the new routing module. Schematic commands: pipe Creates a new, generic pipe definition in the given or default project. 
Usage: ng generate pipe <name> [options] ng g pipe <name> [options] Arguments: ##### <name> The name of the pipe. ########Options: ##### --export default:false The declaring NgModule exports this pipe. ##### --flat default:true When true (the default) creates files at the top level of the project. ##### --lint-fix default:false Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the pipe. ##### --module The declaring NgModule. Aliases: -m ##### --project The name of the project. ##### --skip-import default:false Do not import this pipe into the owning NgModule. ##### --skip-tests default:false Do not create "spec.ts" test files for the new pipe. Schematic commands: resolver Generates a new, generic resolver definition in the given or default project. Usage: ng generate resolver <name> [options] ng g resolver <name> [options] Arguments: ##### <name> The name of the new resolver. ########Options: ##### --flat default:true When true (the default), creates the new files at the top level of the current project. ##### --project The name of the project. ##### --skip-tests default:false Do not create "spec.ts" test files for the new resolver. Schematic commands: service Creates a new, generic service definition in the given or default project. Usage: ng generate service <name> [options] ng g service <name> [options] Arguments: ##### <name> The name of the service. ########Options: ##### --flat default:true When true (the default), creates files at the top level of the project. ##### --lint-fix Deprecated: Use "ng lint --fix" directly instead. Apply lint fixes after generating the service. ##### --project The name of the project. ##### --skip-tests default:false Do not create "spec.ts" test files for the new service. 
Schematic commands: service-worker Pass this schematic to the "run" command to create a service worker Usage: ng generate service-worker [options] ng g service-worker [options] Options: ##### --configuration Deprecated: No longer has an effect. The configuration to apply service worker to. ##### --project The name of the project. ##### --target default:build The target to apply service worker to. Schematic commands: web-worker Creates a new, generic web worker definition in the given or default project. Usage: ng generate web-worker <name> [options] ng g web-worker <name> [options] Arguments: ##### <name> The name of the worker. ########Options: ##### --project The name of the project. ##### --snippet default:true Add a worker creation snippet in a sibling file of the same name. ##### --target default:build Deprecated: No longer has an effect. The target to apply web worker to.
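Per-schematic defaults for ng generate can also be stored in the workspace's angular.json under a "schematics" key, so common flags don't have to be repeated on every invocation. A trimmed sketch, assuming a project named "project" (the chosen values are illustrative):

```json
{
  "projects": {
    "project": {
      "schematics": {
        "@schematics/angular:component": {
          "style": "scss",
          "skipTests": true
        },
        "@schematics/angular:module": {
          "routing": true
        }
      }
    }
  }
}
```

With this in place, ng g c page/edit would emit an .scss style file and skip the spec.ts file without any extra flags.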

How to use the Angular CLI build command line

Compiles an Angular app into an output directory named dist/ at the given output path. The command can be used to build a project of type "application" or "library". When used to build a library, a different builder is invoked, and only the ts-config, configuration, and watch options are applied. All other options apply only to building applications. Start with the production build: production build ng build ng b ng build --configuration=production Compiling @angular/core : es2015 as esm2015 Compiling @angular/common : es2015 as esm2015 Compiling @angular/platform-browser : es2015 as esm2015 Compiling @angular/router : es2015 as esm2015 Compiling @angular/platform-browser-dynamic : es2015 as esm2015 ✔ Browser application bundle generation complete. ✔ Copying assets complete. ✔ Index html generation complete. Initial Chunk Files | Names | Size vendor.js | vendor | 2.36 MB polyfills.js | polyfills | 128.77 kB main.js | main | 55.85 kB runtime.js | runtime | 6.15 kB styles.css | styles | 118 bytes | Initial Total | 2.54 MB Build at: 2021-06-28T03:27:03.978Z - Hash: f06216c42fb3364520f2 - Time: 22783ms Base URL for the application being built. --base-href ng build --output-path output --base-href /tools/ ng b --output-path output --base-href /tools/ ✔ Browser application bundle generation complete. ✔ Copying assets complete. ✔ Index html generation complete. The Angular CLI build Help Commands and Options Usage: ng build <project> [options] ng b <project> [options] Arguments: ##### <project> The name of the project to build. Can be an application or a library. ########Options: ##### --allowed-common-js-dependencies A list of CommonJS packages that are allowed to be used without a build time warning. ##### --aot default:true Build using Ahead of Time compilation. ##### --base-href Base URL for the application being built. ##### --build-optimizer default:true Enables '@angular-devkit/build-optimizer' optimizations when using the 'aot' option. 
##### --common-chunk default:true Generate a separate bundle containing code used across multiple bundles. ##### --configuration One or more named builder configurations as a comma-separated list as specified in the "configurations" section of angular.json. The builder uses the named configurations to run the given target. ##### --cross-origin default:none Define the crossorigin attribute setting of elements that provide CORS support. ##### --delete-output-path default:true Delete the output path before building. ##### --deploy-url URL where files will be deployed. ##### --extract-css default:true Deprecated: Deprecated since version 11.0. No longer required to disable CSS extraction for HMR. Extract CSS from global styles into '.css' files instead of '.js'. ##### --extract-licenses default:true Extract all licenses in a separate file. ##### --help default:false Shows a help message for this command in the console. ##### --i18n-missing-translation default:warning How to handle missing translations for i18n. ##### --index Configures the generation of the application's HTML index. ##### --inline-style-language default:css The stylesheet language to use for the application's inline component styles. ##### --localize Translate the bundles in one or more locales. ##### --main The full path for the main entry point to the app, relative to the current workspace. ##### --named-chunks default:false Use file name for lazy loaded chunks. ##### --ngsw-config-path Path to ngsw-config.json. ##### --optimization default:true Enables optimization of the build output, including minification of scripts and styles, tree-shaking, dead-code elimination, and inlining of critical CSS and fonts. ##### --output-hashing default:none Define the output filename cache-busting hashing mode. ##### --output-path The full path for the new output directory, relative to the current workspace. By default, writes output to a folder named dist/ in the current project. 
##### --poll Enable and define the file watching poll time period in milliseconds. ##### --polyfills The full path for the polyfills file, relative to the current workspace. ##### --preserve-symlinks Do not use the real path when resolving modules. If unset then will default to true if NodeJS option --preserve-symlinks is set. ##### --prod Deprecated: Use --configuration production instead. Shorthand for "--configuration=production". Set the build configuration to the production target. By default, the production target is set up in the workspace configuration such that all builds make use of bundling, limited tree-shaking, and also limited dead code elimination. ##### --progress default:true Log progress to the console while building. ##### --resources-output-path The path where style resources will be placed, relative to outputPath. ##### --service-worker default:false Generates a service worker config for production builds. ##### --show-circular-dependencies default:false Deprecated: The recommended method to detect circular dependencies in project code is to use either a lint rule or other external tooling. Show circular dependency warnings on builds. ##### --source-map default:false Output source maps for scripts and styles. ##### --stats-json default:false Generates a 'stats.json' file which can be analyzed using tools such as 'webpack-bundle-analyzer'. ##### --subresource-integrity default:false Enables the use of subresource integrity validation. ##### --ts-config The full path for the TypeScript configuration file, relative to the current workspace. ##### --vendor-chunk default:false Generate a separate bundle containing only vendor libraries. This option should only be used for development. ##### --verbose default:false Adds more details to output logging. ##### --watch default:false Run build when files change. ##### --web-worker-ts-config TypeScript configuration for Web Worker modules.
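The named configurations that --configuration (and the deprecated --prod) refer to live in the "configurations" section of the build target in angular.json. A trimmed sketch; the "staging" entry and its file replacement are illustrative, not generated by the CLI:

```json
{
  "architect": {
    "build": {
      "configurations": {
        "production": {
          "optimization": true,
          "outputHashing": "all"
        },
        "staging": {
          "fileReplacements": [
            {
              "replace": "src/environments/environment.ts",
              "with": "src/environments/environment.staging.ts"
            }
          ]
        }
      }
    }
  }
}
```

ng build --configuration=staging would then apply that block on top of the default build options.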

How to use the Angular CLI add command line

Adds support for an external library to your project. Adds the npm package for a published library to your workspace, and configures the project in the current working directory (or the default project if you are not in a project directory) to use that library, as specified by the library's schematic. For example, adding @angular/pwa configures your project for PWA support: pwa ng add @angular/pwa ℹ Using package manager: npm ✔ Found compatible package version: @angular/pwa@latest. ✔ Package information loaded. ✔ Package successfully installed. CREATE ngsw-config.json (624 bytes) ...... UPDATE angular.json (3882 bytes) UPDATE package.json (1251 bytes) UPDATE src/app/app.module.ts (789 bytes) UPDATE src/index.html (475 bytes) ✔ Packages installed successfully. For example, if you run the command ng add @angular/material, it will automatically install the package and configure it in the angular.json file too. Install Angular Material ng add @angular/material ℹ Using package manager: npm ✔ Found compatible package version: @angular/[email protected]. ✔ Package information loaded. The Angular CLI add Help Commands and Options Usage: ng add <collection> [options] Arguments: ##### <collection> The package to be added. ########Options: ##### --defaults Disable interactive input prompts for options with a default. ##### --help default:false Shows a help message for this command in the console. ##### --interactive Enable interactive input prompts. ##### --registry The NPM registry to use. ##### --skip-confirmation default:false Skip asking a confirmation prompt before installing and executing the package. Ensure package name is correct prior to using this option. ##### --verbose default:false Display additional details about internal operations during execution.

How to use the Angular CLI command line

Development tools and libraries specialized for Angular. How to use the Angular CLI command line The Angular CLI is a command-line interface tool that you use to initialize, develop, scaffold, and maintain Angular applications directly from a command shell. Install the CLI using the npm package manager: Install npm install -g @angular/cli ng version Angular CLI: 11.2.14 Node: 14.17.1 OS: darwin x64 To create, build, and serve a new, basic Angular project on a development server, go to the parent directory of your new workspace and use the following commands: Create ng new project cd project ng serve ? Do you want to enforce stricter type checking and stricter bundle budgets in the workspace? This setting helps improve maintainability and catch bugs ahead of time. For more information, see https://angular.io/strict Yes ? Would you like to add Angular routing? Yes ? Which stylesheet format would you like to use? SCSS [ https://sass-lang.com/documentation/syntax#scss ] CREATE project/README.md (1017 bytes) ...... Compiles an Angular app into an output directory ng build ✔ Browser application bundle generation complete. ✔ Copying assets complete. ✔ Index html generation complete. Initial Chunk Files | Names | Size vendor.js | vendor | 2.36 MB polyfills.js | polyfills | 128.77 kB main.js | main | 55.85 kB runtime.js | runtime | 6.15 kB styles.css | styles | 118 bytes | Initial Total | 2.54 MB Build at: 2021-06-27T12:30:22.658Z - Hash: f06216c42fb3364520f2 - Time: 10516ms The Angular CLI Help Commands and Options Options: ##### add Adds support for an external library to your project. ##### analytics Configures the gathering of Angular CLI usage metrics. See https://angular.io/cli/usage-analytics-gathering. ##### build (b) Compiles an Angular app into an output directory named dist/ at the given output path. Must be executed from within a workspace directory. ##### deploy Invokes the deploy builder for a specified project or for the default project in the workspace. 
##### config Retrieves or sets Angular configuration values in the angular.json file for the workspace. ##### doc (d) Opens the official Angular documentation (angular.io) in a browser, and searches for a given keyword. ##### e2e (e) Builds and serves an Angular app, then runs end-to-end tests using Protractor. ##### extract-i18n (i18n-extract, xi18n) Extracts i18n messages from source code. ##### generate (g) Generates and/or modifies files based on a schematic. ##### help Lists available commands and their short descriptions. ##### lint (l) Runs linting tools on Angular app code in a given project folder. ##### new (n) Creates a new workspace and an initial Angular application. ##### run Runs an Architect target with an optional custom builder configuration defined in your project. ##### serve (s) Builds and serves your app, rebuilding on file changes. ##### test (t) Runs unit tests in a project. ##### update Updates your application and its dependencies. See https://update.angular.io/ ##### version (v) Outputs Angular CLI version.

How to use the Docker-compose up command line

Create and start containers. Builds, (re)creates, starts, and attaches to containers for a service. If you want to force Compose to stop and recreate all containers, use the --force-recreate flag. Recreate containers docker-compose up -d --force-recreate Recreating docker_mysql_1 ... done Recreating docker_redis_1 ... done Recreating docker_php_1 ... done Recreating docker_nginx_1 ... done Docker-compose up Help Commands and Options Usage: docker-compose up [options] [--scale SERVICE=NUM...] [SERVICE...] Options: ##### -d, --detach Detached mode: Run containers in the background, print new container names. Incompatible with --abort-on-container-exit. ##### --no-color Produce monochrome output. ##### --quiet-pull Pull without printing progress information. ##### --no-deps Don't start linked services. ##### --force-recreate Recreate containers even if their configuration and image haven't changed. ##### --always-recreate-deps Recreate dependent containers. Incompatible with --no-recreate. ##### --no-recreate If containers already exist, don't recreate them. Incompatible with --force-recreate and --renew-anon-volumes. ##### --no-build Don't build an image, even if it's missing. ##### --no-start Don't start the services after creating them. ##### --build Build images before starting containers. ##### --abort-on-container-exit Stops all containers if any container was stopped. Incompatible with --detach. ##### --attach-dependencies Attach to dependent containers. ##### -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown when attached or when containers are already running. (default: 10) ##### -V, --renew-anon-volumes Recreate anonymous volumes instead of retrieving data from the previous containers. ##### --remove-orphans Remove containers for services not defined in the Compose file. ##### --exit-code-from SERVICE Return the exit code of the selected service container. Implies --abort-on-container-exit. 
##### --scale SERVICE=NUM Scale SERVICE to NUM instances. Overrides the `scale` setting in the Compose file if present.
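For example, --scale can override a fixed scale setting declared in the Compose file. A minimal sketch, assuming a hypothetical worker service:

```yaml
# docker-compose.yml (service name and image are illustrative)
version: "3.8"
services:
  worker:
    image: alpine:edge
    command: sleep infinity
```

docker-compose up -d --scale worker=3 would then start three worker containers regardless of any scale value in the file.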

How to use the Docker-compose exec command line

Execute a command in a running container. With this subcommand you can run arbitrary commands in your services. Commands are by default allocating a TTY, so you can use a command such as docker-compose exec web sh to get an interactive prompt. Nginx docker-compose exec nginx sh PHP docker-compose exec php sh /etc/nginx # PHP Version docker exec `docker ps | grep '9000/tcp' | awk '{print $1}'` php -v php -v docker-compose exec php php -v Docker-compose exec Help Commands and Options Usage: docker-compose exec [options] [-e KEY=VAL...] SERVICE COMMAND [ARGS...] Options: ##### -d, --detach Detached mode: Run command in the background. ##### --privileged Give extended privileges to the process. ##### -u, --user USER Run the command as this user. ##### -T Disable pseudo-tty allocation. By default `docker-compose exec` allocates a TTY. ##### --index=index index of the container if there are multiple instances of a service [default: 1] ##### -e, --env KEY=VAL Set environment variables (can be used multiple times, not supported in API < 1.25) ##### -w, --workdir DIR Path to workdir directory for this command.
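The `docker ps | grep | awk` pipeline above picks out a container ID by filtering on a published port and printing the first column. The same text processing can be sketched against canned `docker ps` output, with no daemon required (the IDs and images are made up):

```shell
# Canned `docker ps` output: ID, image, ports (whitespace-separated columns).
ps_output='CONTAINER ID   IMAGE          PORTS
1a2b3c4d5e6f   php:8-fpm      9000/tcp
9f8e7d6c5b4a   nginx:alpine   0.0.0.0:80->80/tcp'

# Keep only the line exposing 9000/tcp, then print its first field (the ID).
container_id=$(printf '%s\n' "$ps_output" | grep '9000/tcp' | awk '{print $1}')
echo "$container_id"   # prints 1a2b3c4d5e6f
```

Against a live daemon, the extracted ID is what gets spliced into `docker exec <id> php -v`.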

How to use the Docker-compose build command line

Build or rebuild services. If you change a service’s Dockerfile or the contents of its build directory, run docker-compose build to rebuild it. Docker-Compose build will read your docker-compose.yml or docker-compose.yaml. This step enables you to add docker-compose commands to build images: Version information docker-compose build docker-compose -f docker-compose.yml build ERROR: Can't find a suitable configuration file in this directory or any parent. Are you in the right directory? Supported filenames: docker-compose.yml, docker-compose.yaml Build the Redis with Docker Compose Do not use cache when building the image. docker-compose build --no-cache redis Show more output docker-compose --verbose build redis Don't print anything to `STDOUT`. docker-compose build redis -q Run containers in the background docker-compose up -d redis compose.config.config.find: Using configuration files: ./docker-compose.yml docker.utils.config.find_config_file: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] docker.utils.config.find_config_file: No config file found docker.utils.config.find_config_file: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] docker.utils.config.find_config_file: No config file found urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/version HTTP/1.1" 200 858 compose.cli.command.get_client: docker-compose version 1.25.0, build 0a186604 docker-py version: 4.1.0 CPython version: 3.7.4 OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019 ...... urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/networks/tmp_default HTTP/1.1" 404 44 compose.project.build: redis uses an image, skipping Docker-compose build Help Commands and Options Usage: docker-compose build [options] [--build-arg key=val...] [SERVICE...] Options: ##### --build-arg key=val Set build-time variables for services. ##### --compress Compress the build context using gzip. ##### --force-rm Always remove intermediate containers. 
##### -m, --memory MEM Set memory limit for the build container. ##### --no-cache Do not use cache when building the image. ##### --no-rm Do not remove intermediate containers after a successful build. ##### --parallel Build images in parallel. ##### --progress string Set type of progress output (`auto`, `plain`, `tty`). ##### --pull Always attempt to pull a newer version of the image. ##### -q, --quiet Don't print anything to `STDOUT`.
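The --build-arg values feed ARG instructions in the service's Dockerfile; defaults can also be declared under the service's build key. A minimal sketch (the service name, path, and argument are illustrative):

```yaml
# docker-compose.yml (illustrative)
services:
  php:
    build:
      context: ./php        # directory containing the Dockerfile
      args:
        PHP_VERSION: "8.1"  # default for `ARG PHP_VERSION` in the Dockerfile
```

docker-compose build --build-arg PHP_VERSION=8.2 php would override that default for a single build.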

How to install DropzoneJS

DropzoneJS is an open source library that provides drag’n’drop file uploads with image previews. DropzoneJS is a JavaScript library that turns any HTML element into a dropzone. This means that a user can drag and drop a file onto it, and Dropzone will display file previews and upload progress, and handle the upload for you via XHR. It’s lightweight, doesn’t depend on any other library (like jQuery) and is highly customizable. Installation You can download the latest version of DropzoneJS from the GitHub releases or use a DropzoneJS CDN.

How to use the Docker-compose command line

This page provides the usage information for the docker-compose command. The following shows the Docker-Compose version information: Version information docker-compose version docker-compose -v docker-compose --version docker-compose version 1.25.0, build 0a186604 docker-py version: 4.1.0 CPython version: 3.7.4 OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019 Use -f to specify name and path of one or more Compose files: Specify an alternate compose file docker-compose -f ~/docker-compose.yml multiple override files docker-compose -f docker-compose.yml -f override.yml -f override2.yml up Pulling db (postgres:latest)... latest: Pulling from library/postgres ef0380f84d05: Pull complete 50cf91dc1db8: Pull complete ...... cecf11b8ccf3: Pull complete Digest: sha256:1364924c753d5ff7e2260cd34dc4ba05ebd40ee8193391220be0f9901d4e1651 Status: Downloaded newer image for postgres:latest If you change a service’s Dockerfile or the contents of its build directory, run docker-compose build to rebuild it. builds the images from the dockerfiles docker-compose build docker-compose build nginx starts the services defined in the docker-compose.yml file -d stands for detached docker-compose up -d Recreate containers docker-compose up -d --force-recreate docker_redis is up-to-date docker_php is up-to-date docker_nginx is up-to-date Docker-compose Help Commands and Options Define and run multi-container applications with Docker. Usage: docker-compose [-f <arg>...] [--profile <name>...] [options] [COMMAND] [ARGS...] 
docker-compose -h|--help Options: ##### -f, --file FILE Specify an alternate compose file (default: docker-compose.yml) ##### -p, --project-name NAME Specify an alternate project name (default: directory name) ##### --profile NAME Specify a profile to enable ##### --verbose Show more output ##### --log-level LEVEL Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL) ##### --no-ansi Do not print ANSI control characters ##### -v, --version Print version and exit ##### -H, --host HOST Daemon socket to connect to ##### --tls Use TLS; implied by --tlsverify ##### --tlscacert CA_PATH Trust certs signed only by this CA ##### --tlscert CLIENT_CERT_PATH Path to TLS certificate file ##### --tlskey TLS_KEY_PATH Path to TLS key file ##### --tlsverify Use TLS and verify the remote ##### --skip-hostname-check Don't check the daemon's hostname against the name specified in the client certificate ##### --project-directory PATH Specify an alternate working directory (default: the path of the Compose file) ##### --compatibility If set, Compose will attempt to convert deploy keys in v3 files to their non-Swarm equivalent Commands: ##### build Build or rebuild services ##### bundle Generate a Docker bundle from the Compose file ##### config Validate and view the Compose file ##### create Create services ##### down Stop and remove containers, networks, images, and volumes ##### events Receive real time events from containers ##### exec Execute a command in a running container ##### help Get help on a command ##### images List images ##### kill Kill containers ##### logs View output from containers ##### pause Pause services ##### port Print the public port for a port binding ##### ps List containers ##### pull Pull service images ##### push Push service images ##### restart Restart services ##### rm Remove stopped containers ##### run Run a one-off command ##### scale Set number of containers for a service ##### start Start services ##### stop Stop services ##### top Display the 
running processes ##### unpause Unpause services ##### up Create and start containers ##### version Show the Docker-Compose version information docker-compose config Usage: docker-compose config [options] Options: ##### --resolve-image-digests Pin image tags to digests. ##### --no-interpolate Don't interpolate environment variables. ##### -q, --quiet Only validate the configuration, don't print anything. ##### --services Print the service names, one per line. ##### --volumes Print the volume names, one per line. ##### --hash="*" Print the service config hash, one per line. Set "service1,service2" for a list of specified services or use the wildcard symbol to display all services. docker-compose down Usage: docker-compose down [options] Options: ##### --rmi type Remove images. Type must be one of: 'all': Remove all images used by any service. 'local': Remove only images that don't have a custom tag set by the `image` field. ##### -v, --volumes Remove named volumes declared in the `volumes` section of the Compose file and anonymous volumes attached to containers. ##### --remove-orphans Remove containers for services not defined in the Compose file ##### -t, --timeout TIMEOUT Specify a shutdown timeout in seconds (default: 10) docker-compose events Usage: docker-compose events [options] Options: ##### --json Output events as a stream of json objects Output events as a stream of json objects docker-compose events --json { "time": "2020-11-20T18:01:03.615550", "type": "container", "action": "create", "id": "213cf7...5fc39a", "service": "web", "attributes": { "name": "application_web_1", "image": "alpine:edge" } } docker-compose images Usage: docker-compose images [options] [SERVICE...] 
Options: ##### -q, --quiet Only display IDs List images docker-compose images Container Image Id Size ---------------------------------------- docker_mysql_1 1e4405fe1ea9 436.6 MB docker_nginx_1 6d9f77a25acb 21.19 MB docker_php_1 5ae217886afb 617.8 MB docker_redis_1 a49ff3e0d85f 29.31 MB docker-compose kill Usage: docker-compose kill [options] [SERVICE...] Options: ##### -s SIGNAL SIGNAL to send to the container. Default signal is SIGKILL. docker-compose logs Usage: docker-compose logs [options] [SERVICE...] Options: ##### --no-color Produce monochrome output. ##### -f, --follow Follow log output. ##### -t, --timestamps Show timestamps. ##### --tail="all" Number of lines to show from the end of the logs for each container. --tail docker-compose logs --tail=2 redis Attaching to docker_redis_1 redis_1 | 1:M 22 Jun 2021 10:31:37.546 * DB loaded from disk: 0.009 seconds redis_1 | 1:M 22 Jun 2021 10:31:37.546 * Ready to accept connections docker-compose ps Usage: docker-compose ps [options] [SERVICE...] Options: ##### -q, --quiet Only display IDs ##### --services Display services ##### --filter KEY=VAL Filter services by a property ##### -a, --all Show all stopped containers (including those created by the run command) docker-compose pull Usage: docker-compose pull [options] [SERVICE...] Options: ##### --ignore-pull-failures Pull what it can and ignores images with pull failures. ##### --parallel Deprecated, pull multiple images in parallel (enabled by default). ##### --no-parallel Disable parallel pulling. ##### -q, --quiet Pull without printing progress information ##### --include-deps Also pull services declared as dependencies