*nix

From Things and Stuff Wiki


Mainly Linux. See also GUI, etc.

Guides

Articles

System

See also Distros

Standards

POSIX / SUS

  • POSIX, an acronym for "Portable Operating System Interface", is a family of standards specified by the IEEE for maintaining compatibility between operating systems. POSIX defines the application programming interface (API), along with command line shells and utility interfaces, for software compatibility with variants of Unix and other operating systems.
  • http://en.wikipedia.org/wiki/Single_UNIX_Specification - the collective name of a family of standards for computer operating systems, compliance with which is required to qualify for use of the "UNIX" trademark. The core specifications of the SUS are developed and maintained by the Austin Group, which is a joint working group of IEEE, ISO JTC 1 SC22 and The Open Group.
  • POSIX.1-2008 (IEEE Std 1003.1™-2008 / The Open Group Technical Standard Base Specifications, Issue 7). The 2013 edition incorporates Technical Corrigendum 1 addressing problems discovered since the approval of the 2008 edition.

POSIX.1-2008 defines a standard operating system interface and environment, including a command interpreter (or “shell”), and common utility programs to support applications portability at the source code level. POSIX.1-2008 is intended to be used by both application developers and system implementors and comprises four major components (each in an associated volume):

  • General terms, concepts, and interfaces common to all volumes of this standard, including utility conventions and C-language header definitions, are included in the Base Definitions volume.
  • Definitions for system service functions and subroutines, language-specific system services for the C programming language, function issues, including portability, error handling, and error recovery, are included in the System Interfaces volume.
  • Definitions for a standard source code-level interface to command interpretation services (a “shell”) and common utility programs for application programs are included in the Shell and Utilities volume.
  • Extended rationale that did not fit well into the rest of the document structure, which contains historical information concerning the contents of POSIX.1-2008 and why features were included or discarded by the standard developers, is included in the Rationale (Informative) volume.

The following areas are outside the scope of POSIX.1-2008:

  • Graphics interfaces
  • Database management system interfaces
  • Record I/O considerations
  • Object or binary code portability
  • System configuration and resource availability

Linux Standard Base

  • Linux Standard Base (LSB) is a joint project by several Linux distributions under the organizational structure of the Linux Foundation to standardize the software system structure, including the filesystem hierarchy, used with the Linux operating system. The LSB is based on the POSIX specification, the Single UNIX Specification, and several other open standards, but extends them in certain areas.

freedesktop.org

  • freedesktop.org is open source / open discussion software projects working on interoperability and shared technology for X Window System desktops. The most famous X desktops are GNOME and KDE, but developers working on any Linux/UNIX GUI technology are welcome to participate. freedesktop.org is building a base platform for desktop software on Linux and UNIX. The elements of this platform have become the backend for higher-level application-visible APIs such as Qt, GTK+, XUL, VCL, WINE, GNOME, and KDE. The base platform is both software and specifications.
$XDG_DATA_HOME defines the base directory relative to which user specific data files should be stored. If $XDG_DATA_HOME is either not set or empty, a default equal to $HOME/.local/share should be used.

$XDG_CONFIG_HOME defines the base directory relative to which user specific configuration files should be stored. If $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used.

$XDG_DATA_DIRS defines the preference-ordered set of base directories to search for data files in addition to the $XDG_DATA_HOME base directory. The directories in $XDG_DATA_DIRS should be separated with a colon ':'.

If $XDG_DATA_DIRS is either not set or empty, a value equal to /usr/local/share/:/usr/share/ should be used.

$XDG_CONFIG_DIRS defines the preference-ordered set of base directories to search for configuration files in addition to the $XDG_CONFIG_HOME base directory. The directories in $XDG_CONFIG_DIRS should be separated with a colon ':'.

If $XDG_CONFIG_DIRS is either not set or empty, a value equal to /etc/xdg should be used.

The order of base directories denotes their importance; the first directory listed is the most important. When the same information is defined in multiple places, the information defined relative to the more important base directory takes precedence. The base directory defined by $XDG_DATA_HOME is considered more important than any of the base directories defined by $XDG_DATA_DIRS. The base directory defined by $XDG_CONFIG_HOME is considered more important than any of the base directories defined by $XDG_CONFIG_DIRS.
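
A minimal shell sketch of that lookup order (the application directory myapp/ and its config file name are hypothetical examples):

# look up a config file according to the XDG base directory spec (sketch)
xdg_config_lookup() {
    name="$1"
    home="${XDG_CONFIG_HOME:-$HOME/.config}"
    dirs="${XDG_CONFIG_DIRS:-/etc/xdg}"
    # search the user directory first, then the system directories in order
    old_ifs="$IFS"; IFS=':'
    for dir in "$home" $dirs; do
        if [ -f "$dir/$name" ]; then
            IFS="$old_ifs"
            printf '%s\n' "$dir/$name"
            return 0
        fi
    done
    IFS="$old_ifs"
    return 1
}

xdg_config_lookup myapp/config    # e.g. prints ~/.config/myapp/config if it exists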

UNIX

  • Ulix (Literate Unix) is a Unix-like operating system that was developed at University of Erlangen-Nuremberg. We used Donald E. Knuth's concept of Literate Programming for the implementation and documentation. The intention was to create a fully working system which can be used in operating system courses to show students how OS concepts (such as paging and scheduling) can be implemented. Literate programs are very accessible because they can be read like a book; the order of presentation is not enforced by program logic or compiler restrictions, but instead is guided by the implementer's creative process. Ulix was written in C and Assembler for the Intel x86 architecture; for literate programming we used Norman Ramsey's noweb tool.

Linux

  • Gittup - This is an entire(-ish) linux distribution in git. Everything is built with tup. That's why it's called gittup. You get the whole development history (where applicable) as part of the standard distribution.

Modules

  • Kernel modules are pieces of code that can be loaded and unloaded into the kernel upon demand. They extend the functionality of the kernel without the need to reboot the system.
  • modprobe is a Linux program originally written by Rusty Russell and used to add a loadable kernel module (LKM) to the Linux kernel or to remove an LKM from the kernel. It is commonly used indirectly: udev relies upon modprobe to load drivers for automatically detected hardware.
  • Dynamic Kernel Module Support (DKMS) is a framework used to generate Linux kernel modules whose sources do not generally reside in the Linux kernel source tree. DKMS enables kernel device drivers to be automatically rebuilt when a new kernel is installed. An essential feature of DKMS is that it automatically recompiles all DKMS modules if a new kernel version is installed. This allows drivers and devices outside of the mainline kernel to continue working after a Linux kernel upgrade. Another benefit of DKMS is that it allows the installation of a new driver on an existing system, running an arbitrary kernel version, without any need for manual compilation or precompiled packages provided by the vendor.
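
Typical module-handling commands (the loop module here is just a convenient example):

lsmod | head                 # list currently loaded modules
modinfo loop                 # show a module's description, parameters and dependencies
sudo modprobe loop           # load a module plus anything it depends on
sudo modprobe -r loop        # unload it again (if unused)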

Tools

linux-zen

linux-rt

linux-ck

linux-pf

Nvidia 340 drivers not available.

kdbus

  • http://lwn.net/Articles/580194/ - "kdbus, an in-kernel implementation of D-Bus. This implementation is able to carry large amounts of data; it can be reasonably used for gigabyte-sized message streams. It can perform zero-copy message passing, but even in the worst case, a message and its response are passed with no more than two copy operations, two validations, and two context switches. Full credential information (user ID, process ID, SELinux label, control group information, capabilities, and much more) is passed with each message, and all messages carry timestamps. Kdbus is always available to the system (no need to wait for the D-Bus daemon to be started), Linux security modules can hook into it directly, various race conditions have been fixed, and the API has simplified."

BSD

Parameters

cat /proc/sys/net/ipv4/ip_forward
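
The same kernel parameters can be read and written with sysctl (writes need root); for example:

sysctl net.ipv4.ip_forward                  # read a parameter
sysctl -w net.ipv4.ip_forward=1             # set it for the running kernel
echo 1 > /proc/sys/net/ipv4/ip_forward      # equivalent write via /proc
sysctl -a | grep forward                    # list all parameters matching "forward"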

Libraries

  • ldconfig - configure dynamic linker run-time bindings

Tools

uname -a


Boot

Network boot

  • https://en.wikipedia.org/wiki/Preboot_Execution_Environment - PXE, sometimes pronounced as pixie, is a specification that describes a standardized client-server environment that boots a software assembly, retrieved from a network, on PXE-enabled clients. On the client side it requires only a PXE-capable network interface controller (NIC), and uses a small set of industry-standard network protocols such as DHCP and TFTP.

The concept behind the PXE originated in the early days of protocols like BOOTP/DHCP/TFTP, and as of 2015 it forms part of the Unified Extensible Firmware Interface (UEFI) standard. Given fast and reliable local area networks (LANs), PXE is the most frequent choice for operating system booting, installation and deployment.


Boot loaders

"In order to boot Arch Linux, a Linux-capable boot loader such as GRUB(2), Syslinux, LILO or GRUB Legacy must be installed to the Master Boot Record or the GUID Partition Table. The boot loader is responsible for loading the kernel and initial ramdisk before initiating the boot process."

GRUB2

Syslinux

rEFInd

Kernel loading

initrd

  • initrd (initial ramdisk) is a scheme for loading a temporary root file system into memory in the boot process of the Linux kernel. initrd and initramfs refer to two different methods of achieving this. Both are commonly used to make preparations before the real root file system can be mounted.

initramfs

  • mkinitcpio is the next generation of initramfs creation.
ps -p 1 -o comm=
  # print the name of PID 1, i.e. which init system is in use

init

  • init (short for initialization) is a daemon process that is the direct or indirect ancestor of all other processes. It automatically adopts all orphaned processes. Init is the first process started during booting, and is typically assigned PID number 1. It is started by the kernel using a hard-coded filename, and if the kernel is unable to start it, a kernel panic will result. Init continues running until the system is shut down. The design of init has diverged in Unix systems such as System III and System V, from the functionality provided by the init in Research Unix and its BSD derivatives. The usage on most Linux distributions is compatible with System V, but some distributions, such as Slackware, use a BSD-style and others, such as Gentoo, have their own customized version. Several replacement init implementations have been written which attempt to address design limitations in the standard versions. These include systemd and Upstart, the latter being used by Ubuntu and some other Linux distributions.

Research UNIX / BSD

SysVinit

daemontools

runit

  • runsv - starts and monitors a service and optionally an appendant log service
  • runsvdir - starts and monitors a collection of runsv(8) processes
  • chpst - runs a program with a changed process state
  • svlogd - runit’s service logging daemon
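
A runsv service is simply a directory with an executable ./run script; a minimal sketch (the service directory and daemon name here are hypothetical):

#!/bin/sh
# /etc/sv/example/run -- started and supervised by runsv
exec 2>&1            # send stderr to the optional log service
exec exampled -f     # exec the daemon in the foreground so runsv can supervise it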

Supervisor

  • Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems. It shares some of the same goals of programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as “process id 1”. Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time. 2004.

Upstart

  • http://upstart.ubuntu.com/ - 2006
    • http://en.wikipedia.org/wiki/Upstart - an event-based replacement for the traditional init daemon – the method by which several Unix-like computer operating systems perform tasks when the computer is started. It was written by Scott James Remnant, a former employee of Canonical Ltd. Upstart operates asynchronously; it handles starting of the tasks and services during boot and stopping them during shutdown, and also supervises the tasks and services while the system is running. Easy transition and perfect backwards compatibility with sysvinit were the explicit design goals. Upstart allows for extensions to its event model through the use of initctl to input custom, single events, or event bridges to integrate many or more-complicated events. By default, Upstart includes bridges for socket, dbus, udev, file, and dconf events; additionally, more bridges (for example, a Mach ports bridge, or a devd (found on FreeBSD systems) bridge) are possible.

OpenRC

  • OpenRC is a dependency based init system that works with the system provided init program, normally /sbin/init. It is not a replacement for /sbin/init. OpenRC is 100% compatible with Gentoo init scripts, which means you can probably find one for the daemons you want to start in the Gentoo Portage Tree. OpenRC, however, is not exclusively used by Gentoo Linux, so our goal is to be platform-agnostic. 2007.

systemd

  • systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit. 2010.
Commands
systemd-analyze
  list basic boot time

systemd-analyze blame
  list specific boot timings

systemd-analyze critical-chain
  list blocking boot timings 
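
systemd-analyze can also render the same timing data graphically:

systemd-analyze plot > boot.svg
  write an SVG chart of unit start-up times during boot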

Documentation


Articles


launchd

sinit

Other

Reboot and shutdown

Command-line

TTY / console

  • https://en.wikipedia.org/wiki/Computer_terminal - an electronic or electromechanical hardware device that is used for entering data into, and displaying data from, a computer or a computing system. Early terminals were inexpensive devices but very slow compared to punched cards or paper tape for input, but as the technology improved and video displays were introduced, terminals pushed these older forms of interaction from the industry. A related development was timesharing systems, which evolved in parallel and made up for any inefficiencies of the user's typing ability with the ability to support multiple users on the same machine, each at their own terminal.

The function of a terminal is confined to display and input of data; a device with significant local programmable data processing capability may be called a "smart terminal" or fat client. A terminal that depends on the host computer for its processing power is called a "dumb terminal" or thin client. A personal computer can run terminal emulator software that replicates the function of a terminal, sometimes allowing concurrent use of local programs and access to a distant terminal host system.


  • https://en.wikipedia.org/wiki/System_console - or computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators. Today communication with system consoles is generally done abstractly, via the standard streams (stdin, stdout, and stderr), but there may be system-specific interfaces, for example those used by the system kernel.


  • https://en.wikipedia.org/wiki/Virtual_console (VC) – also known as a virtual terminal (VT) – is a conceptual combination of the keyboard and display for a computer user interface. It is a feature of some operating systems such as BSD, Linux, and UnixWare in which the system console of the computer can be used to switch between multiple virtual consoles to access unrelated user interfaces. Virtual consoles date back at least to Xenix in the 1980s.

Usually in Linux (see Linux console), the first six virtual consoles provide a text terminal with a login prompt to a Unix shell. The graphical X Window System starts in the seventh virtual console. In Linux, the user switches between them with the key combination Alt plus a function key – for example Alt+F1 to access the virtual console number 1. Alt+Left arrow changes to the previous virtual console and Alt+Right arrow to the next virtual console. To switch from the X Window System, Ctrl+Alt+function key works. (Note that users can redefine these default key combinations.)

The need for virtual consoles has lessened now that most applications work in the graphical framework of the X Window System, where each program has a window and the text mode programs can be run in terminal-emulator windows. If several sessions of the X Window System are required to run in parallel, such as in the case of fast user switching or when debugging X programs on a separate X server, each X session usually runs in a separate virtual console.
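
The same switching can be done from a shell with the kbd utilities:

fgconsole         # print the number of the active virtual console
sudo chvt 3       # switch to virtual console 3, like pressing Alt+F3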


  • https://en.wikipedia.org/wiki/POSIX_terminal_interface - the generalized abstraction, comprising both an Application Programming Interface for programs, and a set of behavioural expectations for users of a terminal, as defined by the POSIX standard and the Single Unix Specification. It is a historical development from the terminal interfaces of BSD version 4 and Seventh Edition Unix.
$XDG_VTNR
  # returns vty number


  • https://en.wikipedia.org/wiki/C0_and_C1_control_codes - or control character sets define control codes for use in text by computer systems that use the ISO/IEC 2022 system of specifying control and graphic characters. Most character encodings, in addition to representing printable characters, also have characters such as these that represent additional information about the text, such as the position of a cursor, an instruction to start a new line, or a message that the text has been received.


  • https://en.wikipedia.org/wiki/Escape_character - a character which invokes an alternative interpretation on subsequent characters in a character sequence. An escape character is a particular case of metacharacters. Generally, the judgement of whether something is an escape character or not depends on context.
  • https://en.wikipedia.org/wiki/Escape_sequence - a series of characters used to change the state of computers and their attached peripheral devices. These are also known as control sequences, reflecting their use in device control. Some control sequences are special characters that always have the same meaning. Escape sequences use an escape character to change the meaning of the characters which follow it, meaning that the characters can be interpreted as a command to be executed rather than as data.



  • https://en.wikipedia.org/wiki/Terminfo - a library and database that enables programs to use display terminals in a device-independent manner. Mark Horton implemented the first terminfo library in 1981-1982 as an improvement over termcap.[1] The improvements include faster access to stored terminal descriptions, longer, more understandable names for terminal capabilities and general expression evaluation for strings sent to the terminal.


  • http://linux.die.net/man/1/toe - table of (terminfo) entries - With no options, toe lists all available terminal types by primary name with descriptions. File arguments specify the directories to be scanned; if no such arguments are given, your default terminfo directory is scanned. If you also specify the -h option, a directory header will be issued as each directory is entered.

/usr/share/terminfo/?/*
  # compiled terminal description database
  • http://linux.die.net/man/1/tic - the terminfo entry-description compiler. The command tic translates a terminfo file from source format into compiled format. The compiled format is necessary for use with the library routines in ncurses.


  • infocmp - compare or print out terminfo descriptions


  • http://linux.die.net/man/1/captoinfo - convert a termcap description into a terminfo description. captoinfo looks in a file for termcap descriptions. For each one found, an equivalent terminfo description is written to standard output. Termcap tc capabilities are translated directly to terminfo use capabilities.
  • http://linux.die.net/man/1/infotocap - convert a terminfo description into a termcap description. infotocap looks in file for terminfo descriptions. For each one found, an equivalent termcap description is written to standard output. Terminfo use capabilities are translated directly to termcap tc capabilities.
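
A typical round trip with these tools, e.g. to tweak a terminal description locally (the .src file name is just an example):

infocmp xterm-256color > xterm-256color.src   # decompile the entry to source form
vi xterm-256color.src                         # edit capabilities as needed
tic -x xterm-256color.src                     # recompile; run as a normal user this installs under ~/.terminfo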


See also Typography#Terminal


Getty

  • http://en.wikipedia.org/wiki/Getty_(Unix) - short for "get teletype", is a Unix program running on a host computer that manages physical or virtual terminals (TTYs). When it detects a connection, it prompts for a username and runs the 'login' program to authenticate the user.
  • https://wiki.archlinux.org/index.php/Getty - getty is the generic name for a program which manages a terminal line and its connected terminal. Its purpose is to protect the system from unauthorized access. Generally, each getty process is started by init and manages a single terminal line. Within the context of a typical Arch Linux installation, the terminals managed by the getty processes are implemented as virtual consoles.

agetty

mgetty

fgetty

qingy

PTY / terminal emulator

  • https://en.wikipedia.org/wiki/Pseudoterminal - or pseudotty, or PTY is a pair of pseudo-devices, one of which, the slave, emulates a real text terminal device, the other of which, the master, provides the means by which a terminal emulator process controls the slave.

If it's not a hardware terminal (teleprinter/tty), it's a pseudoterminal, although tty is also used to mean pty.

  • https://en.wikipedia.org/wiki/Terminal_emulator - or terminal application, term, or tty for short, is a program that emulates a video terminal within some other display architecture. Though typically synonymous with a shell or text terminal, the term terminal covers all remote terminals, including graphical interfaces. A terminal emulator inside a graphical user interface is often called a terminal window.
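
To see which pseudoterminal a shell is attached to:

tty                   # e.g. /dev/pts/2 inside a terminal emulator or SSH session
ls -l /dev/pts/       # slave ends of the currently allocated ptys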



Linux console

  • https://en.wikipedia.org/wiki/Linux_console - a system console internal to the Linux kernel (a system console is the device which receives all kernel messages and warnings and which allows logins in single user mode).

The Linux kernel supports virtual consoles - consoles that are logically separate, but which access the same physical keyboard and display.[2] The Linux console (and Linux virtual consoles) are implemented by the VT subsystem of the Linux kernel, and do not rely on any user space software.[3] This is in contrast to a terminal emulator, which is a user space process that emulates a terminal, and is typically used in a graphical display environment.

Kmscon

  • https://en.wikipedia.org/wiki/Kmscon - a virtual console which runs in userspace and intends to replace the Linux console, a terminal built into the Linux kernel. Kmscon uses the KMS driver for its output, it is multi-seat capable, supports internationalized keyboard input and UTF-8 terminal output. The input support is implemented using X keyboard extension (XKB). Development of Kmscon stopped in March 2015. There is a new project called systemd-consoled.

systemd-consoled

fbcon

  • framebuffer console (fbcon), as its name implies, is a text console running on top of the framebuffer device. It has the functionality of any standard text console driver, such as the VGA console, with the added features that can be attributed to the graphical nature of the framebuffer.

In the x86 architecture, the framebuffer console is optional, and some even treat it as a toy. For other architectures, it is the only available display device, text or graphical.

What are the features of fbcon? The framebuffer console supports high resolutions, varying font types, display rotation, primitive multihead, etc. Theoretically, multi-colored fonts, blending, aliasing, and any feature made available by the underlying graphics card are also possible.

fbterm



Xterm

  • How to change the title of an xterm - This document explains how to use escape sequences to dynamically change window and icon titles of an xterm. Examples are given for several shells, and the appendix gives escape sequences for some other terminal types.
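
For example, the usual OSC escape sequences work from any shell running inside xterm:

printf '\033]0;%s\007' "some title"      # set icon name and window title
printf '\033]2;%s\007' "window only"     # set the window title only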

urxvt

If not using a tiling window manager, tabbed plus a remote tmux session works well. urxvt has some overhead because of its Unicode handling.

urxvtd -q -f -o
urxvtc -e command
urxvt -e sh -c "ls && zsh"
  # launch command then exit to zsh [29]
  • Arch Forum: [Solved Change urxvt's font on the fly: Can YOU do it?]
 printf '\33]50;%s\007' "xft:Terminus:pixelsize=16"

urxvt sets the character width to that of the widest character in the font; a patched package exists to work around this.

st

-*-terminus-medium-r-*-*-10-*-*-*-*-*-*-*

Other

  • LilyTerm is a terminal emulator based on libvte that aims to be fast and lightweight, licensed under GPLv3.
  • termite - A keyboard-centric VTE-based terminal, aimed at use within a window manager with tiling and/or tabbing support.
  • Final Term is a new breed of terminal emulator. It goes beyond mere emulation and understands what is happening inside the shell it is hosting. This allows it to offer features no other terminal can, including: Semantic text menus, Smart command completion, GUI terminal controls [30]
  • HTerm - A graphical terminal requiring no serverside support. [31]

Retro;

/usr/lib/xscreensaver/apple2 -text -fast -program bash
  # run bash inside xscreensaver's Apple ][ terminal emulation
  • cool-retro-term is a terminal emulator which mimics the look and feel of the old cathode tube screens. It has been designed to be eye-candy, customizable, and reasonably lightweight. It uses the Konsole engine which is powerful and mature. This terminal emulator requires Qt 5.2 or higher to run.


  • Nixar - Joyable equivalents for existent linux commands

Dropdown

Web

  • FireSSH is a free, cross-platform SSH terminal client for Mozilla Firefox and Google Chrome. Written entirely in Javascript!
  • KeyBox - A web-based SSH console that executes commands on multiple shells. KeyBox allows you to manage keys, share terminal commands, and upload files to multiple systems simultaneously. [33]
  • oterm is a web browser unix terminal. It serves a console so you can access your server from anywhere in the world where you have an Internet connection. It effectively is an xterm in a browser. [34]

Text-based user interface

Terminal multiplexing

Screen

config goes in ~/.screenrc

escape ^Ww
  set the escape (command) key to Ctrl-W

Tmux

Config goes in ~/.tmux.conf, which can be symlinked to a hidden git repo folder.

Better than screen.

Articles
Commands
tmux lsc
  list clients

tmux detach-client -t /dev/pts/26
  detach other clients from the session (e.g. if the screen size is wrong)
split-window
bind-key -n C-S-Left swap-window -t -1
bind-key -n C-S-Right swap-window -t +1

open new window in current pwd;

bind <key of your choice> default-path $(pwd) \; split-window\; set default-path ~/
tmux_pwd () {
   [ -z "${TMUX}" ] && return
   tmux set default-path "$(pwd)" > /dev/null && tmux new-window
   ( ( sleep 300;
   tmux set default-path ~/ > /dev/null; ) & ) > /dev/null 2>&1
}

alias tpwd="tmux_pwd"
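
Note that default-path was removed in tmux 1.9; on newer versions the same effect is achieved with the -c flag and the pane_current_path format:

bind c new-window -c "#{pane_current_path}"
bind '"' split-window -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"
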
Additions
  • tmux-resurrect saves all the little details from your tmux environment so it can be completely restored after a system restart (or when you feel like it). No configuration is required. You should feel like you never quit tmux. [37]
  • https://github.com/bsandrow/tmux.sh - A simple script to allow defaulting to grouped sessions. This makes it possible for two clients to connect to the same session and switch windows independently of each other.

dvtm

  • dvtm - brings the concept of tiling window management, popularized by X11-window managers like dwm to the console. As a console window manager it tries to make it easy to work with multiple console based programs.

Pymux

Logging

  • Snoopy Logger - designed to aid a sysadmin by providing a log of commands executed. Snoopy is completely transparent to the user and applications. It is linked into programs to provide a wrapper around calls to execve(). Logging is done via syslog.
  • sniffy - allows you to trace/log the data of any pseudo terminal in the system. Due to the way the terminal works, such a terminal trace provides complete information of what happened on the terminal screen, and sniffy is able to display/replay this information.
  • ttysnoop - allows you to snoop on login tty's through another tty-device or pseudo-tty. The snoop-tty becomes a 'clone' of the original tty, redirecting both input and output from/to it.
  • Rootsh is a wrapper for shells which logs all echoed keystrokes and terminal output to a file and/or to syslog. Its main purpose is the auditing of users who need a shell with root privileges. They start rootsh through the sudo mechanism.
  • shwatchr is a small Perl script that audits logins to shell accounts on Linux/*NIX machines that originate from arbitrary hosts on the Internet.


Misc

  • abduco - provides session management i.e. it allows programs to be run independently from their controlling terminal. That is programs can be detached - run in the background - and then later reattached. Together with dvtm it provides a simpler and cleaner alternative to tmux or screen.
  • Mosh (mobile shell) - Remote terminal application that allows roaming, supports intermittent connectivity, and provides intelligent local echo and line editing of user keystrokes. Mosh is a replacement for SSH. It's more robust and responsive, especially over Wi-Fi, cellular, and long-distance links.


  • retty is a tiny tool that lets you attach processes running on other terminals. So you were running that mutt outside of screen at your home machine and now wanna check your mail? Attach it with retty, do whatever you want, detach it again and everything is as it was before. You don't have to run them all in screen just in case.

Colour

Traditionally, terminals could display 2 colours. Then some could display 16 (ANSI) colours (or 8 or 88 colours) for foreground and background.

Most modern terminals support 256, if not 24-bit colours.

tput colors
  # query the terminfo db for number of colours supported
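
To check for 24-bit support, print a direct-colour SGR sequence:

printf '\033[38;2;255;100;0m%s\033[0m\n' TRUECOLOR
  # the word renders in orange if the terminal understands 24-bit colour escapes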


"Setting $TERM in the shell profile is generally a bad idea, since you may wish to use different terminals. It is preferable to add it to .Xdefaults (or similar file read by X applications):"

XTerm*termName: xterm-256color


"xterm has a 256 color mode extension, FbTerm also add it in this version. But xterm's 256 color escape sequences conflict with the linux sequences implemented by FbTerm, so private escape sequences were introduced to support this feature and a new terminfo database entry named "fbterm" was added to use these private sequences, all program based on terminfo should work with it. By default, FbTerm sets environment variable "TERM" to value "linux", you need run "TERM=fbterm /path/to/program" to enable 256 color mode."

dircolors

ls takes its color settings from the environment variable LS_COLORS. dircolors is merely a convenient way to generate this environment variable. To have this environment variable take effect system-wide, put it in your shell's startup file.

  • dircolors - color setup for ls. Output commands to set the LS_COLORS environment variable.
  • dir_colors - configuration file for dircolors - /etc/DIR_COLORS, ~/.dir_colors, etc.
dircolors -p > ~/.dircolors
  # creates the dircolors configuration file (a colour database used to generate and export the LS_COLORS variable)

For Bash in ~/.bashrc and Zsh in ~/.zshrc, add:

eval $(dircolors ~/.dircolors)

Testing

(x=`tput op` y=`printf %76s`;for i in {0..255};do o=00$i;echo -e ${o:${#o}-3:3} `tput setaf $i;tput setab $i`${y// /=}$x;done)
  # list all colours

echo $LS_COLORS | tr : '\n' | sed 's/\(.*\)=\(.*\)/\x1b[\2m\1\t\2\x1b[0m/'
  # print dircolor config with colours
#!/usr/bin/bash
#
#   This file echoes a bunch of color codes to the 
#   terminal to demonstrate what's available.  Each 
#   line is the color code of one foreground color,
#   out of 17 (default + 16 escapes), followed by a 
#   test use of that color on all nine background 
#   colors (default + 8 escapes).
#

T='gYw'   # The test text

echo -e "\n                 40m     41m     42m     43m\
    44m     45m     46m     47m";

for FGs in '    m' '   1m' '  30m' '1;30m' '  31m' '1;31m' '  32m' \
          '1;32m' '  33m' '1;33m' '  34m' '1;34m' '  35m' '1;35m' \
          '  36m' '1;36m' '  37m' '1;37m';
 do FG=${FGs// /}
 echo -en " $FGs \033[$FG  $T  "
 for BG in 40m 41m 42m 43m 44m 45m 46m 47m;
   do echo -en "$EINS \033[$FG\033[$BG  $T  \033[0m";
 done
 echo;
done
echo

Articles

Themes

For ZSH, Vim, Tmux, etc.

Tools

Threads

Keyboard

  • xcape allows you to use a modifier key as another key when pressed and released on its own. Note that it is slightly slower than pressing the original key, because the pressed event does not occur until the key is released. The default behaviour is to generate the Escape key when Left Control is pressed and released on its own. (If you don't understand why anybody would want this, I'm guessing that Vim is not your favourite text editor ;)

Mouse

  • gpm is the mouse support for Linux on the console

Other

Shell

/etc/shells

General

to find;

  • way of making previous command screen output be pushed to a buffer that can be flipped through/forked?
  • paste pwd to readline/zle
  • make alt-delete not break things
Alt-left/right
  move forwards/back one word
  • chsh - change your login shell
    • https://en.wikipedia.org/wiki/Chsh - is a command on Unix-like operating systems that is used to change a login shell. Users can either supply the pathname of the shell that they wish to change to on the command line, or supply no arguments, in which case chsh allows the user to change the shell interactively. chsh is a setuid program that modifies the /etc/passwd file, and only allows ordinary users to modify their own login shells. The superuser can modify the shells of other users, by supplying the name of the user whose shell is to be modified as a command-line argument.
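
For example (the zsh path varies by distribution; it must appear in /etc/shells):

grep zsh /etc/shells       # check that the shell is permitted
chsh -s /usr/bin/zsh       # change your own login shell; takes effect on next login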

Echo

Pipes

  • Socketpipe directly connects the input/output of a remote process with local programs over a TCP/IP socket. Although the initial communication setup is performed through client-server intermediaries such as SSH or RSH, the communication channel that socketpipe establishes is a direct socket connection between the local and the remote processes, similar to a local pipe. The use of socketpipe removes the inefficiency of multiple data copies and context switches through RSH/SSH, and can in many cases offer dramatic throughput improvements.
  • Spiped (pronounced "ess-pipe-dee") is a utility for creating symmetrically encrypted and authenticated pipes between socket addresses, so that one may connect to one address (e.g., a UNIX socket on localhost) and transparently have a connection established to another address (e.g., a UNIX socket on a different system). This is similar to 'ssh -L' functionality, but does not use SSH and requires a pre-shared symmetric key.
  • Pipeglade - Graphical User Interfaces, The UNIX Way, a helper program that displays graphical user interfaces for other programs. It renders the GUI definition found in a GtkBuilder file (created using the Glade Interface Designer), and communicates with the main program solely via pipes or fifos. [49]

Here document

tr a-z A-Z << END_TEXT
one two three
four five six
END_TEXT

This yields the output:

ONE TWO THREE
FOUR FIVE SIX
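
The <<- variant strips leading tab characters from the body and the delimiter, so here documents can be indented inside scripts (the indentation must be tabs, not spaces):

tr a-z A-Z <<- END_TEXT
	one two three
	four five six
	END_TEXT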

Redirection

The > and >> operators redirect file descriptors (FDs).

In bash there are three standard FDs: standard input (stdin), standard output (stdout) and standard error (stderr), also referred to as FD 0, FD 1 and FD 2 respectively.

Normally all FDs point to the terminal, but this can be changed using redirection.

For example, if you call:

command > log.txt

This redirects the output to the file log.txt. It is equivalent to:

command 1> log.txt

As this only redirects stdout, errors are still shown in the terminal. To redirect stderr to log.txt instead, run:

command 2> log.txt

Again, this only redirects stderr. To redirect both stdout and stderr, duplicate stderr onto stdout with the >&N form:

command 1> log.txt 2>&1

Redirections are processed from left to right: stdout is first redirected to log.txt, then stderr is duplicated onto stdout, so both streams end up in the file. (Writing 2>&1 before the file redirection would leave stderr pointing at the terminal.)

When redirecting with >, bash creates the file if necessary and truncates it if it already exists. To avoid losing what has already been written to the log file, use >> in the same way; output is then appended to the existing file.
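
A summary of the common forms:

command >  log.txt          # stdout to file, truncating it
command >> log.txt          # stdout to file, appending
command 2> err.txt          # stderr only
command >  log.txt 2>&1     # stdout to file, then stderr duplicated onto stdout
command &> log.txt          # bash shorthand for the line above
command >> log.txt 2>&1     # append both streams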

Readline

Used by bash and other programs.
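
Readline is configured through ~/.inputrc (or the file named by $INPUTRC); a few common settings:

# ~/.inputrc
"\e[A": history-search-backward     # Up arrow searches history for the typed prefix
"\e[B": history-search-forward
set completion-ignore-case on
set show-all-if-ambiguous on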

IFS
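
IFS (the internal field separator) controls how the shell splits words; a common use is splitting colon-separated values (bash):

IFS=':' read -r -a dirs <<< "$PATH"
printf '%s\n' "${dirs[@]}"      # one PATH entry per line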

Wildcards

Patterns

  • http://mywiki.wooledge.org/BashGuide/Patterns - BASH offers three different kinds of pattern matching. Pattern matching serves two roles in the shell: selecting filenames within a directory, or determining whether a string conforms to a desired format. On the command line you will mostly use globs. These are a fairly straight-forward form of patterns that can easily be used to match a range of files, or to check variables against simple rules. The second type of pattern matching involves extended globs, which allow more complicated expressions than regular globs. Since version 3.0, Bash also supports regular expression patterns. These will be useful mainly in scripts to test user input or parse data. (You can't use a regular expression to select filenames; only globs and extended globs can do that.)
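
A quick example of each kind (bash):

ls *.txt                                      # glob
shopt -s extglob
ls !(*.o|*.a)                                 # extended glob: everything except .o and .a files
[[ $answer == y?(es) ]] && echo yes           # extended glob in a test
[[ $num =~ ^[0-9]+$ ]] && echo "all digits"   # regular expression match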

Substitution

print \T\h\i\s\ \i\s\ \*\p\o\i\n\t\l\e\s\s\*\ \ \-\ \b\u\t\ \v\a\l\i\d\!
% read string
  This is a *string* with various `special' characters
% print -r -- ${(q)string}
  This\ is\ a\ \*string\*\ with\ various\ \`special\'\ characters

sh

The Bourne shell, sh, was written by Stephen Bourne at AT&T as the original Unix command line interpreter; it introduced the basic features common to all the Unix shells, including piping, here documents, command substitution, variables, control structures for condition-testing and looping and filename wildcarding. The language, including the use of a reversed keyword to mark the end of a block, was influenced by ALGOL 68.

bash

man: echo

Basics

Options

Completion

Prompt

Etc.

Variables

  • declare or typeset - builtins, which are exact synonyms, permit modifying the properties of variables. This is a very weak form of the typing available in certain programming languages. The declare command is specific to version 2 or later of Bash. The typeset command also works in ksh scripts.
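
For example:

declare -i count=3          # integer; arithmetic evaluation on assignment
declare -r VERSION="1.0"    # read-only
declare -a list=(a b c)     # indexed array
declare -A colours          # associative array (bash 4+)
declare -x EDITOR=vim       # exported to the environment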

zsh

Configuration

  • zbkrc README file - This file describes startup files for Z-shell (zsh), bash and ksh

Commands are then read from $ZDOTDIR/.zshenv. If the shell is a login shell, commands are read from /etc/zprofile and then $ZDOTDIR/.zprofile. Then, if the shell is interactive, commands are read from /etc/zshrc and then $ZDOTDIR/.zshrc. Finally, if the shell is a login shell, /etc/zlogin and $ZDOTDIR/.zlogin are read.


Variables

ZLE

The ZSH equivalent of readline for bash.


bindkey "\e[A" history-beginning-search-backward
bindkey "\e[B" history-beginning-search-forward

Expansion

$_
  last argument of the previous command
$?
  returned exit code of last exec
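
Typical uses:

mkdir -p ~/some/new/dir && cd "$_"      # $_ holds the last argument of the previous command
grep -q root /etc/passwd; echo $?       # 0 if the previous command succeeded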

Prompt

Completion

compdef _files my-local-python-script
# simple _files completion [56]

compdef '_files -g "*.(eps|ps|pdf)"' okular
# restrict to some file extensions

compdef -e 'words[1]=(git pull origin); service=git; (( CURRENT+=2 )); _git' ggl
# make alias use existing completion [57]

Plugins


  • https://github.com/oknowton/zsh-dwim - attempts to predict what you will want to do next. It provides a key binding (control-u) that will replace the current (or previous) command line with the command you will want to run next.


  • zsh-history-substring-search - This is a clean-room implementation of the Fish shell's history search feature, where you can type in any part of any previously entered command and press the UP and DOWN arrow keys to cycle through the matching commands. You can also use K and J in VI mode or ^P and ^N in EMACS mode for the same.
  • zsh-completions - Additional completion definitions for Zsh. This projects aims at gathering/developing new completion scripts that are not available in Zsh yet. The scripts are meant to be contributed to the Zsh project when stable enough.
  • FIZSH is the Friendly Interactive ZSHell. It is a front end to ZSH. It provides the user of ZSH with interactive syntax highlighting and Matlab-like history search. It also has a prompt that is both short and informative.
  • Antigen - A plugin manager for zsh, inspired by oh-my-zsh and vundle. Antigen is a small set of functions that help you easily manage your shell (zsh) plugins, called bundles. The concept is pretty much the same as bundles in a typical vim+pathogen setup. Antigen is to zsh, what Vundle is to vim.

Themes

Frameworks

  • Prezto is the configuration framework for Zsh; it enriches the command line interface environment with sane defaults, aliases, functions, auto completion, and prompt themes.

Dotfiles

lshell

  • lshell is a shell coded in Python, that lets you restrict a user's environment to limited sets of commands, choose to enable/disable any command over SSH (e.g. SCP, SFTP, rsync, etc.), log user's commands, implement timing restriction, and more.

git-shell

  • git-shell - Restricted login shell for Git-only SSH access

Inferno

  • Inferno is a distributed operating system, originally developed at Bell Labs, but now developed and maintained by Vita Nuova® as Free Software. Applications written in Inferno's concurrent programming language, Limbo, are compiled to its portable virtual machine code (Dis), to run anywhere on a network in the portable environment that Inferno provides. Unusually, that environment looks and acts like a complete operating system.

fish

fishfish

Other

  • Xiki: A shell console with GUI features. Xiki does what shell consoles do, but lets you edit everything at any time. It's trivial to make your own commands and menus to access other tools. [58]
  • Pdsh is a high-performance, parallel remote shell utility. It uses a sliding window of threads to execute remote commands, conserving socket resources while allowing some connections to timeout if needed. It was originally written as a replacement for IBM's DSH on clusters at LLNL.
  • shok is a cross-platform non-POSIX interactive command shell. Similar to bash and cmd.exe, it allows the user to invoke programs and script their execution. shok will provide a secure, platform-agnostic, and user-friendly programming language that is designed to facilitate filesystem interaction and system management. [59]
  • Perl Shell (psh) combines aspects of bash and other shells with the power of Perl scripting.

History

Commands

Resources

manuals

  • bro pages are a highly readable supplement to man pages with concise, common-case examples for Unix commands

SSH

See SSH

General

to sort

Use output as input.

cat `locate file.txt`

or

cat $(locate file.txt)
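
For long lists, piping into xargs avoids hitting argument-length limits:

locate file.txt | xargs cat          # works for simple names
locate -0 file.txt | xargs -0 cat    # NUL-separated, safe for names with spaces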

Processes

  • https://en.wikipedia.org/wiki/Process_(computing) - an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.



  • https://en.wikipedia.org/wiki/Process_control_block - also called Task Controlling Block, process table, Task Struct, or Switchframe, is a data structure in the operating system kernel containing the information needed to manage a particular process. The PCB is "the manifestation of a process in an operating system".


System calls

  • https://en.wikipedia.org/wiki/System_call - how a program requests a service from an operating system's kernel. This may include hardware-related services (for example, accessing a hard disk drive), creation and execution of new processes, and communication with integral kernel services such as process scheduling. System calls provide an essential interface between a process and the operating system.
  • https://en.wikipedia.org/wiki/Fork_(system_call) - an operation whereby a process creates a copy of itself. It is usually a system call, implemented in the kernel. Fork is the primary (and historically, only) method of process creation on Unix-like operating systems.
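
On Linux, strace makes these calls visible, e.g.:

strace -f -e trace=execve,clone,fork,vfork bash -c ls 2>&1 | head
  # -f follows children: the shell clones/forks itself, then the child calls execve to run ls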

/proc

  • https://en.wikipedia.org/wiki/Procfs - a special filesystem in UNIX-like operating systems that presents information about processes and other system information in a hierarchical file-like structure, providing a more convenient and standardized method for dynamically accessing process data held in the kernel than traditional tracing methods or direct access to kernel memory. Typically, it is mapped to a mount point named /proc at boot time. The proc file system acts as an interface to internal data structures in the kernel. It can be used to obtain information about the system and to change certain kernel parameters at runtime (sysctl).

More than a few operating systems support the proc filesystem, including Solaris, IRIX, Tru64 UNIX, BSD, Linux (Linux kernel extends it to non–process-related data), IBM AIX (which bases its implementation on Linux to improve compatibility), QNX, and Plan 9 from Bell Labs. The proc filesystem provides a method of communication between kernel space and user space. For example, the GNU version of ps uses the procfs to obtain its data, without using any specialized system calls.
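
Some quick examples of reading it directly:

cat /proc/loadavg            # load averages and the most recent PID
head /proc/cpuinfo           # processor information
head /proc/$$/status         # status of the current shell ($$ is its PID)
ls -l /proc/$$/fd            # its open file descriptors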

  • free - Report the amount of free and used memory in the system
  • kill - Send a signal to a process based on PID
  • pgrep - List processes based on name or other attributes
  • pkill - Send a signal to a process based on name or other attributes
  • pmap - Report memory map of a process
  • ps - Report information of processes
  • pwdx - Report current directory of a process
  • skill - Obsolete version of pgrep/pkill
  • slabtop - Display kernel slab cache information in real time
  • snice - Renice a process
  • sysctl - Read or Write kernel parameters at run-time
  • tload - Graphical representation of system load average
  • top - Dynamic real-time view of running processes
  • uptime - Display how long the system has been running
  • vmstat - Report virtual memory statistics
  • w - Report logged in users and what they are doing
  • watch - Execute a program periodically, showing output fullscreen


ps ax | grep -process-


  • fuser - identifies what processes are using files.
  • killall - kills a process by its name, similar to pkill found in some other Unices.
  • pstree - Shows currently running processes in a tree format.
  • peekfd - Peek at file descriptors of running processes.

Inter-process communication

  • https://en.wikipedia.org/wiki/Inter-process_communication - the activity of sharing data across multiple and commonly specialized processes using communication protocols. Typically, applications using IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing. Methods for achieving IPC are divided into categories which vary based on software requirements, such as performance and modularity requirements, and system circumstances, such as network bandwidth and latency.


A signal is a software interrupt delivered to a process. The operating system uses signals to report exceptional situations to an executing program. Some signals report errors such as references to invalid memory addresses; others report asynchronous events, such as disconnection of a phone line.
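
In a shell script, trap installs a handler for a signal; for example (the temp file name is just for illustration):

trap 'echo "interrupted, cleaning up"; rm -f /tmp/work.$$; exit 1' INT TERM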

Interrupts

  • https://en.wikipedia.org/wiki/Interrupt - a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities.
  • https://en.wikipedia.org/wiki/Interrupt_handler - a callback function in microcontroller firmware, an operating system or a device driver, whose execution is triggered by the reception of an interrupt. In general, interrupts and their handlers are used to handle high-priority conditions that require the interruption of the current code the processor is executing.

Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the interrupt handler completes its task. For example, pressing a key on a computer keyboard, or moving the mouse, triggers interrupts that call interrupt handlers which read the key, or the mouse's position, and copy the associated information into the computer's memory. An interrupt handler is a low-level counterpart of event handlers. These handlers are initiated by either hardware interrupts or interrupt instructions in software, and are used for servicing hardware devices and transitions between protected modes of operation such as system calls.

  • https://en.wikipedia.org/wiki/Exception_handling - the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution. It is provided by specialized programming language constructs or computer hardware mechanisms.

Utils

at bg chroot fg time


kill

In Unix and Unix-like operating systems, kill is a command used to send a signal to a process. By default, the message sent is the termination signal, which requests that the process exit. But kill is something of a misnomer; the signal sent may have nothing to do with process killing. The kill command is a wrapper around the kill() system call, which sends signals to processes or process groups on the system, referenced by their numeric process IDs (PIDs) or process group IDs (PGIDs). kill is always provided as a standalone utility as defined by the POSIX standard. However, most shells have built-in kill commands that may slightly differ from it.

A process can be sent a SIGTERM signal in four ways (the process ID is '1234' in this case):

kill 1234
kill -s TERM 1234
kill -TERM 1234
kill -15 1234

The process can be sent a SIGKILL signal in three ways:

kill -s KILL 1234
kill -KILL 1234
kill -9 1234

killall -u username
  terminate all username's applications to log them out

nohup

  • nohup - Run a command immune to hangups, runs the given command with hangup signals ignored, so that the command can continue running in the background after you log out.

Debugging

  • https://en.wikipedia.org/wiki/DTrace - a comprehensive dynamic tracing framework created by Sun Microsystems for troubleshooting kernel and application problems on production systems in real time. Originally developed for Solaris, it has since been released under the free Common Development and Distribution License (CDDL) and has been ported to several other Unix-like systems. In December 2012, Oracle announced the general availability of DTrace for Oracle Linux.
  • LTTng is an open source tracing framework for Linux.


  • Forkstat is a program that logs process fork(), exec() and exit() activity. It is useful for monitoring system behaviour and to track down rogue processes that are spawning off processes and potentially abusing the system.
  • ps-watcher – A Perl Program for Monitoring a System via ps-like Commands.


  • https://en.wikipedia.org/wiki/Ptrace - system call found in several Unix and Unix-like operating systems. By using ptrace (the name is an abbreviation of "process trace") one process can control another, enabling the controller to inspect and manipulate the internal state of its target. ptrace is used by debuggers and other code-analysis tools, mostly as aids to software development.


Memory

Configuration

/etc
~/.config

/usr/local/etc/

etc.

Dotfiles

  • Awesome dotfiles - A curated list of dotfiles resources. Inspired by the awesome list thing.
  • http://dotfiles.github.io/
  • dotphiles - A community driven framework of dotfiles, for the usual terminal apps and shells, designed to work across multiple platforms and degrade for older versions of software or O/S, allowing you to use the same settings on all your machines.

Management

See also Stack#Provision_and_configuration

mr
  • myrepos - You have a lot of version control repositories. Sometimes you want to update them all at once. Or push out all your local changes. You use special command lines in some repositories to implement specific workflows. Myrepos provides a mr command, which is a tool to manage all your version control repositories. All you need to get started is some already checked out repos. These could be using git, or bzr, mercurial or darcs, or many other version control systems. Doesn't matter, they're all supported!
mr register
  # run inside each of your repositories, sets up a ~/.mrconfig file listing your repositories.

In general, any mr command runs recursively over any repository located somewhere in or under the current directory. These work no matter which version control system is used for a repository.

mr update     
  # update all repos. run inside a repository, it'll only act on that repository.
mr up
  # shortcut

mr -j5 update
  # will run 5 concurrent jobs!
mr commit

mr push    # or
mr pu

mr status

mr diff, and a lot of other commands.  Of course, you can still use the native version control commands too.

etc.
vcsh
  • vcsh - Version Control System for $HOME - multiple Git repositories in $HOME

vcsh enables single-purpose repositories and stores them in a hidden directory. However, it does not create symbolic links in $HOME; it puts the actual files right into $HOME.

vcsh + mr:

gaf
ghar
  • ghar can help you manage your $HOME in git using a collection of git repos separated by topic or privacy. For example if you work on a variety of machines and want to share your .emacs on github but not your .ssh then ghar is for you.
GNU Stow
  • GNU Stow is a symlink farm manager which takes distinct packages of software and/or data located in separate directories on the filesystem, and makes them appear to be installed in the same place. For example, /usr/local/bin could contain symlinks to files within /usr/local/stow/emacs/bin, /usr/local/stow/perl/bin etc., and likewise recursively for any other subdirectories such as .../share, .../man, and so on. This is particularly useful for keeping track of system-wide and per-user installations of software built from source, but can also facilitate a more controlled approach to management of configuration files in the user's home directory, especially when coupled with version control systems.
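
Typical dotfiles usage, assuming a hypothetical ~/dotfiles directory containing per-package subdirectories such as vim/.vimrc and zsh/.zshrc:

cd ~/dotfiles
stow -t ~ vim        # symlinks ~/.vimrc -> ~/dotfiles/vim/.vimrc
stow -t ~ -D vim     # -D removes the symlinks again
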
XStow
homeshick
  • https://github.com/andsens/homeshick - bash stand-in for homesick. In Unix, configuration files are king. Tailoring tools to suit your needs through configuration can be empowering. An immense number of hours is spent on getting these adjustments just right, but once you leave the confines of your own computer, these local optimizations are left behind. By the power of git, homeshick enables you to bring the symphony of settings you have poured your heart into with you to remote computers. With it you can begin to focus even more energy on bettering your work environment since the benefits are reaped on whichever machine you are using. However bare bones these machines are, provided that at least bash 3 and git 1.5 are available you can use homeshick. homeshick can handle multiple dotfile repositories. This means that you can install larger frameworks like oh-my-zsh or a multitude of emacs or vim plugins alongside your own customizations without clutter.

Bash, single directory for repos.

Homesick
  • Homesick is sorta like rip, but for dotfiles. It uses git to clone a repository containing dotfiles, and saves them in ~/.homesick. It then allows you to symlink all the dotfiles into place with a single command. A repository that is compatible with homesick is called a 'castle'. To act as a castle, a repository must be organized like so: it contains a 'home' directory; 'home' contains any number of files and directories that begin with '.'.

Ruby GEM, single directory for repos.
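Typical usage looks something like this (the GitHub repo name is a placeholder):

gem install homesick
homesick clone user/dotfiles   # clones into ~/.homesick/repos/dotfiles
homesick symlink dotfiles      # symlinks everything under the castle's home/ into $HOME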

seashell

Seashell is the basis for a dotfile management repo. Clone it, commit your dotfiles to it, and you're done. It doesn't depend on anything esoteric, just good old fashioned git, bash, and GNU make. The unit tests depend on shutil2. Features:

  • Keeps your $HOME free of unnecessary git files (.git, .gitignore, .gitmodules, etc.)
  • Permits arbitrary directory structure (e.g. a/b/c will get moved to ~/a/b/c, creating any missing directories)
  • The repo can live anywhere (e.g. ~/Dropbox/.dotfiles)
  • The dotfiles can be exported anywhere (e.g. sudo make install DST_ROOT=/home/$someone_else)

Single directory for repos

DeeDot
  • DeeDot is a tool for installing dotfile symlinks in your $HOME, allowing you to keep your versioned dotfiles under a single directory. DeeDot requires only Perl, and doesn't need any extra modules. Host your dotfiles in any version control system (e.g. Git, Hg, SVN, even rsync), and voila, your dotfiles are available across multiple machines.

Single directory for repos

make

best for single repo?

dotfiler
  • https://github.com/svetlyak40wt/dotfiler - Clone this project somewhere like $HOME/.dotfiles and add $HOME/.dotfiles/bin into your PATH. Then clone some config files into the $HOME/.dotfiles . Run dot update to make all necessary symlinks.

single directory for repos


dotbot
  • https://github.com/anishathalye/dotbot - symlink manager with yaml/json config. Currently, Dotbot knows how to link files and folders, execute shell commands, and clean directories of broken symbolic links.

python

dottle

My main inspiration is dotbot, from which I stole the configuration yaml file. Requires: a POSIX-compatible shell in /bin/sh (grep, sed, ln, mv, rm, mkdir, printf, etc.), curl and tar; git [optional] to get stuff from the internet.

Config Curator
homedir

python, single repo

evil
  • Evil Tomato - A versioned portable roaming shell environment.

bash, single repo, symlinks

sds/dot

bash, single repo


dotsync
  • dotsync keeps your local dotfiles in sync with a git repository and keeps multiple remote machines in sync, either with them pulling from the git repo or pushed via rsync. Master servers can be assigned, for cases where groups of machines are behind firewalls or only accessible from a certain location. dotsync assumes that you have ssh set up correctly, with ssh-agent and configs for correct usernames etc; if you can't 'ssh hostname' it probably won't work, and if you're asked for passwords it will take hours with any number of machines.

Single repo. Uses a .dotsyncrc config file.

Ceasar/dots
  • dots - Modular dotfile manager. Symlinks dotfiles from plugins to HOME. Uses Python Fabric, a library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks. dead?
dotman
  • dotman - saves the day by allowing you to: Upload a set of master configs to a remote repository. Grab that config on each of your environments. ... and diff it against the previous config so you can make localized changes.
EvanPurkhiser/dots
  • https://github.com/EvanPurkhiser/dots - When installing your dotfiles onto a new machine, dots offers you the ability to select a specific 'group' of dotfiles that you would like to have installed into that environment. By organizing your dotfiles into logical groups (such as 'machine' groups) it's possible to only install the dotfiles that are required by that environment.

Python.

briefcase

Ruby, single repo

rcm
  • https://github.com/thoughtbot/rcm - rcup(1) is the main program. It is used to install and update dotfiles, with support for tags, host-specific files, and multiple source directories. rcdn(1) is the opposite of rcup(1). mkrc(1) is for introducing a dotfile into your dotfiles directory, with support for tags and multiple source directories. lsrc(1) shows you all your dotfiles and where they would be symlinked to. It is used by rcup(1) but is provided for your own use, too.

one folder. symlinks or copies.
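Roughly, the workflow is (rcm defaults to ~/.dotfiles as the source directory):

mkrc ~/.vimrc   # move ~/.vimrc into ~/.dotfiles and symlink it back
lsrc            # preview which files would be linked where
rcup -v         # create/update all the symlinks
rcdn            # remove them again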

thoughtbot/dotfiles
mackup
  • https://github.com/lra/mackup - Mackup keeps your application settings in sync. Supported storages: Dropbox, Google Drive, Copy, iCloud, Box, or anything able to sync a folder (e.g. Git).

Python, symlinks, single repo/folder

dotstuff

Copying rather than symlinking, which has the benefit that the source files can be passed through a simple preprocessor to do some customization for different systems. Also you can get a diff between your current files and the state that you just synced.

stipple


sshrc
  • sshrc - bring your .bashrc, .vimrc, etc. with you when you ssh. sshrc works just like ssh, but it also sources the ~/.sshrc on your local computer after logging in remotely. [82]
netskel
  • https://github.com/nugget/netskel - an http(s) based file synchronization tool which can be used to mirror a central repository of files to multiple UNIX shell accounts. It was built to create a simple and automated mechanism for users to push out their various shell environment config files to all the machines where they have shell accounts.

Daemons

A daemon is a program that runs in the background, waiting for events to occur and offering services. A good example is a web server that waits for a request to deliver a page, or an ssh server waiting for someone trying to log in. While these are full featured applications, there are daemons whose work is not that visible. Daemons are for tasks like writing messages into a log file (e.g. syslog, metalog) or keeping your system time accurate (e.g. ntpd). For more information see man 7 daemon.

  • Daemon turns other processes into daemons. There are many tasks that need to be performed to correctly set up a daemon process. This can be tedious. Daemon performs these tasks for other processes.
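A hypothetical invocation (the server path is a placeholder; see daemon(1) for the exact flags supported by your version):

daemon --name=example-server --respawn -- /usr/local/bin/example-server --port 8080
  # run example-server detached from the terminal, restarting it if it dies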

Programs

  • Vim - Text editor, etc.
  • Git - Distributed version control






GNU coreutils

nice
nice -n 19 [command]
  run a command at lowest priority
nice -n -20 [command]
  run a command at highest priority
tee
basename

GNU findutils

GNU Find Utilities are the basic directory searching utilities of the GNU operating system. These programs are typically used in conjunction with other programs to provide modular and powerful directory search and file locating capabilities to other commands.

The tools supplied with this package are:

  • find - search for files in a directory hierarchy
  • locate - list files in databases that match a pattern
  • updatedb - update a file name database
  • xargs - build and execute command lines from standard input
xargs

xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input. Blank lines on the standard input are ignored.

Because Unix filenames can contain blanks and newlines, this default behaviour is often problematic; filenames containing blanks and/or newlines are incorrectly processed by xargs. In these situations it is better to use the -0 option, which prevents such problems. When using this option you will need to ensure that the program which produces the input for xargs also uses a null character as a separator. If that program is GNU find for example, the -print0 option does this for you.

If any invocation of the command exits with a status of 255, xargs will stop immediately without reading any further input. An error message is issued on stderr when this happens.

xargs -I {} rm {}
  # -I replaces {} in the command with each input item (one invocation per item)

find . -name '*.py' | xargs wc -l
  Recursively find all Python files and count the number of lines

find . -name '*~' -print0 | xargs -0 rm
  Recursively find all Emacs backup files and remove them, allowing for filenames with whitespace

find . -name '*.py' | xargs grep 'import'
  Recursively find all Python files and search them for the word ‘import’

printf

tr

echo $PATH | tr \: \\n | sort

sed

echo "test string oldWord yadayada" | sed 's/oldWord/newWord/g'
find . -name "*.html" -exec sed -i "s/oldWord/newWord/g" '{}' \;
  replace text in multiple files [85]
 echo "<a href="index.html"><img src="logo.svg" id="site-logo"></a>
          <h1>Site Title</h1>" | sed 'N; s@</a>\
          <h1>Site Title</h1>@\
          <h1>Site Title</h1></a>@g'
   multiline replacement

awk

awk '{print $NF}'
  # print the last field of each line

ack

ag

ag --hidden --ignore .git --ignore .winscp -l -g ""
  # lists all files

to sort

  • auditd - userspace component to the Linux Auditing System [89]
  • shelr - console screencasting tool

~/.local/share/applications/mimeinfo.cache

If you want to write daemons in languages that can link against C functions (e.g. C, C++), see libslack which contains the core functionality of daemon.

Calculator

Desktop entries

Finding programs

whereis
apropos apropos vim
  search the whatis database for strings

Aliases

From http://news.ycombinator.com/item?id=4867369 :

Tab completion and aliases need not be mutually exclusive. For instance, in zsh you can use:

alias gs='git status'
compdef _git gs=git-status

FYI: I also mapped g without parameters to `git status -sb` but g with parameters will simply execute everything as normal

g () {
  if [ $# -eq 0 ]
  then
    git status -sb
  else
    git "$@"
  fi
}
compdef g=git

Monitoring

See also Server#Monitoring

time [command]
  return with command execution time

ps

pstree

perf

  • perf (sometimes "Perf Events" or perf tools, originally "Performance Counters for Linux", PCL) - is a user-space performance analyzing tool in Linux with a git-like interface and subcommands. It is capable of statistical profiling of the entire system (both kernel and user code), a single CPU, or several threads.

lsof

  • lsof - "lists open files" (lots, given "everything" is a file)
lsof -i :[port]
  show which application is using a specific port
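A few more common invocations:

lsof -u [user]
  # files opened by a given user
lsof -p [pid]
  # files opened by a given process
lsof +D /some/dir
  # everything open under a directory (recursive, can be slow)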

SYSSTAT

iostat, isag, mpstat, pidstat, sadf, sar

iostat -d -x 2
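Some further sysstat examples:

sar -u 1 5
  # CPU utilization, five one-second samples
mpstat -P ALL 2
  # per-CPU statistics every two seconds
pidstat -d 2
  # per-process disk I/O every two seconds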

OProfile

top

Part of the procps package.

Current stats only, no disk or network. Sorts by cpu by default. A multicore box can show more than 100%.

z
  colour!
d
  change update interval
return, space
  refresh

H
  show separate threads of each process
i
  show run queue (R) or waiting on disk io (D, red)
1
  show separate cpus

q
  quit

htop

atop

Shows short lived processes that run and finish between update interval times.

lines;

  • PRC: Total CPU time in system and user mode, total number of processes and of zombie processes, and the number of processes that exited during the polling interval.
  • CPU and CPL: CPU utilization and load (averaged over 1, 5 and 15 minutes).
  • cpu: individual CPU usage.
  • MEM and SWP: Amount of memory and swap space that is available and where it’s allocated. vmcom and vmlim show how much virtual memory space is committed and what the limit is.
  • DSK: disk utilization. avio shows the average number of milliseconds per request.
  • NET: Network utilization for the TCP layer (“transport”), the IP layer (“network”) and each interface.
Ctrl-f
  forward a page
Ctrl-b
  back a page

C
  sort by cpu activity
M
  sort by memory consumption
D
  sort by disk activity
N
  sort by network activity
A
  sort by most active system resource (auto mode)

P
  focus on process (regex)
y
  show all threads (inc. idle)

s
 Scheduling information
m
  Memory consumption
d
  Disk utilization
v
 Variable information
c
  Command line
p
 Accumulated per program
u
  Accumulated per user
n
 Network utilization (patched kernel)

i
  interval timer
t
  trigger update
atopsar -c 60 5
  current CPU utilization, five times with an interval of sixty seconds

atopsar -A -b 13:00 -e 13:35
  all available reports are generated, starting from 13:00 till 13:35

nmon

vmstat

collectl

free

  • free - Display amount of free and used memory in the system

Other

  • osop - other side of the pipe - outputs formatted metrics to your Stdout.
  • vpsinfo is a Linux server monitoring script, written in PHP, that provides web access to system status information. It gathers the output from several common Linux commands into one web page, providing a quick overview of the system's current state.
  • Dstat is a versatile replacement for vmstat, iostat, netstat and ifstat. Dstat overcomes some of their limitations and adds some extra features, more counters and flexibility. Dstat is handy for monitoring systems during performance tuning tests, benchmarks or troubleshooting.
  • rtop - Remote Server Monitoring over SSH [96]
input/output

Memory and/or disk.

iotop -obt -d2

Cron

A cronjob is a task that a Cron system is instructed to run periodically. The crontab file is a configuration file for a user that defines tasks to run under the user’s account. The system crontab (/etc/crontab) specifies cron jobs for the system.

Config

crontab -l
  view crontabs
crontab -e
  edit their crontabs
crontab -r
 remove their crontabs
crontab saved_crontab_filename
  overwrite their old crontab with saved crontab

There are several special predefined values which can be used to substitute the CRON expression.

Entry                  Description                                 Equivalent To
 @yearly (or @annually) Run once a year, midnight, Jan. 1st         0 0 1 1 *
 @monthly               Run once a month, midnight, first of month  0 0 1 * *
 @weekly                Run once a week, midnight on Sunday         0 0 * * 0
 @daily                 Run once a day, midnight                    0 0 * * *
 @hourly                Run once an hour, beginning of hour         0 * * * *
 @reboot                Run at startup                              (none)
*    *    *    *    *  command to be executed
┬    ┬    ┬    ┬    ┬
│    │    │    │    │
│    │    │    │    │
│    │    │    │    └───── day of week (0 - 6) (0 is Sunday, or use names)
│    │    │    └────────── month (1 - 12)
│    │    └─────────────── day of month (1 - 31)
│    └──────────────────── hour (0 - 23)
└───────────────────────── min (0 - 59)
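For example, a user crontab entry such as the following (script and log paths are hypothetical) runs a backup script every Monday at 02:30 and appends its output to a log:

30 2 * * 1  /home/user/bin/backup.sh >> /home/user/backup.log 2>&1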
  • /etc/cron.allow - If this file exists, then you must be listed therein (your username must be listed) in order to be allowed to use cron jobs.
  • /etc/cron.deny - If the cron.allow file does not exist but the /etc/cron.deny file does exist, then you must not be listed in the /etc/cron.deny file in order to use cron jobs.

dcron

Vanilla dcron supports asynchronous job processing: just use @hourly, @daily, @weekly or @monthly with a job name.

dcron has @daily, @hourly and so on. In fcron, you can use standard crontab entries and add &bootrun to the beginning of the line to repeat "missed" cronjobs.

cronie

Arch default cron as dcron wasn't maintained at the time.

dcron and fcron are not under active development; cronie is. cronie is small (0.20 MB installed) and developed by Red Hat, so it is not going anywhere and there is a guaranteed upgrade path. As far as I can tell, cronie has no deps beyond glibc and pam. cronie has /etc/cron.d support and anacron support via an anacrontab config file. cronie extends the original vixie cron package, so the syntax, core feature set, etc. are stable. It also implements advanced security hooks and can integrate with SELinux.

anacron

  • anacron is a computer program that performs periodic command scheduling which is traditionally done by cron, but without assuming that the system is running continuously. Thus, it can be used to control the execution of daily, weekly, and monthly jobs (or anything with a period of n days) on systems that don't run 24 hours a day.

fcron

Fcron’s own crontab system uses the fcrontab file for configuration information. The fcrontab syntax is similar to, but differs slightly from, the classic Vixie/ISC Cron crontab notation. fcron lacks /etc/cron.d/ functionality.

Other

File watchers

inotify

  • inotify-tools is a C library and a set of command-line programs for Linux providing a simple interface to inotify.
  • inotifywatch - gather filesystem access statistics using inotify
  • inotail is a replacement for the 'tail' program found in the base installation of every Linux/UNIX system. It makes use of the inotify infrastructure in recent versions of the Linux kernel to speed up tailing files in the follow mode (the '-f' option). Standard tail polls the file every second by default, while inotail listens to special events sent by the kernel through the inotify API to determine whether a file needs to be reread. Currently inotail is not fully compatible with either POSIX or GNU tail, but might be in the future.
  • entr - Run arbitrary commands when files change
  • Guard::Shell - automatically runs shell commands when watched files are modified.
  • incron - inotify cron system. This program is an "inotify cron" system. It consists of a daemon and a table manipulator. You can use it a similar way as the regular cron. The difference is that the inotify cron handles filesystem events rather than time periods.
  • nodemon will watch the files in the directory that nodemon was started, and if they change, it will automatically restart your node application.
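For instance (paths and commands here are only illustrative):

inotifywait -m -e create,modify,delete ~/some/dir
  # print a line for every matching filesystem event in the directory
ls *.c | entr make
  # rebuild whenever any of the listed C files change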

Other

watch -n 2 ls -la

chroot

Startup

  • e4rat ("Ext4 - Reducing Access Times") is a toolset to accelerate the boot process as well as application startups. Through physical file reallocation, e4rat eliminates both seek times and rotational delays, which leads to a high disk transfer rate. Placing files on disk in sequential order allows files to be read ahead efficiently in parallel with program startup.

ncurses, etc.

Web

Cute

See also GUI#Utils

hexdump -C /dev/urandom | pv -q -L 2k | grep "ca fe"

Windows

  • WINE

OS X

Storage

Device manager

udev

  • udev is a generic kernel device manager. It runs as a daemon on a Linux system and listens (via netlink socket) to uevents the kernel sends out when a new device is initialized or a device is removed from the system. It loads kernel modules in parallel rather than serially, which gives a potential performance advantage; the modules are loaded asynchronously. The inherent disadvantage of this method is that udev does not always load modules in the same order on each boot. If the machine has multiple block devices, this may manifest itself in the form of device nodes changing designations randomly. For example, if the machine has two hard drives, /dev/sda may randomly become /dev/sdb. udev is now part of the systemd code tree. [100]

Manages /dev. Rules in /etc/udev/rules.d/ and /usr/lib/udev/rules.d/

udevadm info -a -n [device path]
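As a sketch, a custom rule in /etc/udev/rules.d/ might look like this (the vendor/product IDs and symlink name are made-up placeholders; the attributes to match can be taken from the udevadm output above):

ACTION=="add", SUBSYSTEM=="block", ATTRS{idVendor}=="abcd", ATTRS{idProduct}=="1234", SYMLINK+="backupdisk"
  # creates /dev/backupdisk when the matching device appears; reload rules with: udevadm control --reload-rules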

Other

Partitions

  • GNU Parted manipulates partition tables. This is useful for creating space for new operating systems, reorganizing disk usage, copying data on hard disks and disk imaging. The package contains a library, libparted, as well as a command-line frontend, parted, which can also be used in scripts.
  • http://gparted.sourceforge.net/

Storage virtualization

  • https://en.wikipedia.org/wiki/Logical_disk - a device that provides an area of usable storage capacity on one or more physical disk drive components in a computer system. Other terms that are used to mean the same thing are partition, logical volume, and in some cases a virtual disk (vdisk).
  • http://en.wikipedia.org/wiki/fstab - or file systems table, a system configuration file commonly found on Unix systems. On Linux, it is part of the util-linux package. The fstab file typically lists all available disks and disk partitions, and indicates how they are to be initialized or otherwise integrated into the overall system's file system. fstab is still used for basic system configuration, notably of a system's main hard drive and startup file system, but for other uses has been superseded in recent years by automatic mounting. The fstab file is most commonly used by the mount command, which reads the fstab file to determine which options should be used when mounting the specified device.
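A typical fstab line, for reference (the UUID is a placeholder; the fields are device, mount point, filesystem type, options, dump flag, fsck pass):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime  0  1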

LVM

  • https://en.wikipedia.org/wiki/Logical_volume_management - provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. In particular, a volume manager can concatenate, stripe together or otherwise combine partitions into larger virtual ones that administrators can re-size or move, potentially without interrupting system use. Volume management represents just one of many forms of storage virtualization; its implementation takes place in a layer in the device-driver stack of an OS (as opposed to within storage devices or in a network).
  • https://en.wikipedia.org/wiki/Volume_group - the highest level abstraction used within the Logical Volume Manager (LVM). It gathers together a collection of Logical Volumes (LV) and Physical Volumes (PV) into one administrative unit.
  • LVM2 - refers to the userspace toolset that provide logical volume management facilities on linux. It is reasonably backwards-compatible with the original LVM toolset.
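A minimal sketch of the usual LVM2 workflow (device names and sizes are only examples):

pvcreate /dev/sdb1                 # initialise a partition as a physical volume
vgcreate vg0 /dev/sdb1             # create a volume group containing it
lvcreate -L 20G -n home vg0        # carve out a 20G logical volume named "home"
mkfs.ext4 /dev/vg0/home            # create a filesystem on it
lvextend -L +5G -r /dev/vg0/home   # later: grow it by 5G, resizing the filesystem too (-r)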

RAID

See also Hardware#RAID

Loopback

  • losetup - set up and control loop devices

Network

  • xNBD is yet another NBD (Network Block Device) server program, which works with the NBD client driver of Linux Kernel.

File systems

  • http://en.wikipedia.org/wiki/Virtual_file_system - or virtual filesystem switch is an abstraction layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. A VFS can, for example, be used to access local and network storage devices transparently without the client application noticing the difference. It can be used to bridge the differences in Windows, Mac OS and Unix filesystems, so that applications can access files on local file systems of those types without having to know what type of file system they are accessing.
  • wipefs - wipe a filesystem signature from a device

Ext2/3/4

kjournald is responsible for the journal of ext3 [103]

ZFS

"FreeBSD ZFS tuning guide wiki indicates you'll need about 5GB of ram per 1TB of saved disk space"

Btrfs

See also Backup#Btrfs

General

Subvolumes appear like directories. inode is different.

"Btrfs support is included in the linux package (as a module). Needs a reboot after installing before btrfs recognised. User space utilities are available in btrfs-progs. For multi-devices support (RAID like feature of btrfs) aka btrfs volume in early boot, you have to enable btrfs mkinitcpio hook (provided by mkinitcpio package) to be able to use, for example, a root btrfs volume. If the btrfs volume is a non-system volume, one only needs to set USEBTRFS="yes" in /etc/rc.conf. However, if you only use bare btrfs partition, such options are not needed."

"The btrfs scrub command reads redundant data and validates all the checksums, correcting any errors it finds along the way, using the checksum to determine which copy is the valid one. But with a single drive, how can it correct anything? The metadata - the file system overhead that is used to manage your data - is always stored in a redundant manner by default, even on a single drive. As a result, any corrupted metadata can be corrected, on the fly."

"EXT4 checksums its journal, which AFAIK will protect against errors caused by sync failures (ie. power failure during disk I/O). But it’s not going to protect against latent sector errors. To do that, you need checksumming on all the file data, along the lines of what ZFS or BTRFS provides."

A cross-subvolume copy patch has made it into 3.6_rc. This patch will allow cp --reflink across subvolumes, as long as the copy does not cross mount points.

Commands
mkfs.btrfs -L [label] /dev/[device]

btrfs filesystem df /media/[drive]
btrfs filesystem show

btrfs-debug-tree -R /dev/sdg
  show drive/subvolume infos, unmounted

btrfs subvolume create [<dest>/]
btrfs subvolume snapshot /mnt/btrfs /mnt/btrfs/snapshot_of_root
btrfs subvolume delete [<dest>/]

mount -o remount,compress=lzo /dev/sdXY /mnt/target

btrfs filesystem defragment /
btrfs device add /dev/sdc /mnt/btrfs

Cloning a file between subvolumes;

cp --reflink /mnt/MYFILES/myfile1 /mnt/MYFILES/myfile3

copy-on-write, without the RAM requirement of ZFS; snapshots every 30 seconds; ability to mount from a previous generation

GUI
  • btrfs-gui is a graphical user interface tool for inspecting and managing btrfs filesystems. It is capable of managing filesystems on the local machine, and filesystems on remote network-accessible machines. It requires root access to the machine to perform most of its tasks (but separates the root-access part from the GUI).
  • Snapper is a tool for managing btrfs snapshots. Apart from the obvious creation and deletion of snapshots it can compare snapshots and revert differences between snapshots. In simple terms, this allows users to view older versions of files and revert changes. Snapper is available as a command line interface tool and a YaST module. Both make use of the C++ library libsnapper which is also available to other programs.
Articles

XFS

Ceph

  • Ceph is a free software distributed file system. Ceph's main goals are to be POSIX-compatible, and completely distributed without a single point of failure. The data is seamlessly replicated, making it fault tolerant. Clients mount the file system using a Linux kernel client. On March 19, 2010, Linus Torvalds merged the Ceph client for Linux kernel 2.6.34 which was released on May 16, 2010. An older FUSE-based client is also available. The servers run as regular Unix daemons.

F2FS

bcache

FAT

NTFS

exFAT

Other

  • Opendedup - develops SDFS, a file-system that does inline deduplication.

Repair

Image files

Tools



Union mount

aufs

Overlayfs

Encryption

LUKS

EncFS

eCrypt

TrueCrypt

Rubberhose

Emulation

Files

Directory structure

See LSB, etc.

  • rmshit - Keep $HOME or other dirs clean from unwanted tempfiles, configs and other crap you'll never use that's autocreated upon execution of badly behaving applications
  • BleachBit quickly frees disk space and tirelessly guards your privacy. Free cache, delete cookies, clear Internet history, shred temporary files, delete logs, and discard junk you didn't know was there. Designed for Linux and Windows systems, it wipes clean a thousand applications including Firefox, Internet Explorer, Adobe Flash, Google Chrome, Opera, Safari, and more. Beyond simply deleting files, BleachBit includes advanced features such as shredding files to prevent recovery, wiping free disk space to hide traces of files deleted by other applications, and vacuuming Firefox to make it faster.

File names

Commands

Block and partition names;

sd[a,b,etc]
  drive

sda[1,2,etc]
  partition of drive
lsblk
  # list information about block devices.
cat /proc/partitions

blkid
  # block device id (uuid, etc.) info

Mount

mount /dev/sdxY /some/directory

umount /some/directory

mount -o remount /
  remount partition after /etc/fstab change
mount -o loop example.img /home/you/dir

dd

  • dd - Copy a file, converting and formatting according to the options.
    • dd is a common Unix program whose primary purpose is the low-level copying and conversion of raw data.

if stands for input file and of for output file.

dd if=/dev/sda of=/mnt/sdb1/backup.img
  create a backup

dd if=/mnt/sdb1/backup.img of=/dev/sda
  restore a backup

dd if=/dev/sdb of=/dev/sdc
  clone a drive

dd if=/dev/sdb | ssh root@target "(cat >backup.img)"
  backup over network

dd if=/dev/cdrom of=cdimage.iso
  backup a cd
dd if=/dev/sr0 of=myCD.iso bs=2048 conv=noerror,sync
  create an ISO disk image from a CD-ROM.

dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=noerror
  Clone one partition to another

dd if=/dev/ad0 of=/dev/ad1 bs=1M conv=noerror
  Clone a hard disk "ad0" to "ad1".

dd if=/dev/zero bs=1024 count=1000000 of=file_1GB
  write test: create a ~1GB file from /dev/zero in 1024-byte blocks

dd if=file_1GB of=/dev/null bs=64k
  read test: read the file back in 64k blocks to measure sequential read performance

du (disk usage)

du -sh
  size of a folder
du -S
  size of files in a folder

du -aB1m|awk '$1 >= 100'
  everything over 100MB
cd / && sudo du -khs *
  show the size of each top-level directory

sudo du -a --max-depth=1 /usr/lib | sort -n -r | head -n 20
  size of program folders /usr/lib

du -sk ./* | sort -nr | awk 'BEGIN{ pref[1]="K"; pref[2]="M"; pref[3]="G";} { total = total + $1;
x = $1; y = 1;  while( x > 1024 ) { x = (x + 1023)/1024; y++; }
printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } END { y = 1; while( total > 1024 )
{ total = (total + 1023)/1024; y++; } printf("Total: %g%s\n",int(total*10)/10,pref[y]); }'

df

df -h
  human readable

di

  • di is a disk information utility, displaying everything (and more) that your 'df' command does. It features the ability to display your disk usage in whatever format you prefer. It also checks the user and group quotas, so that the user sees the space available for their use, not the system wide disk space.

du

  • du - Summarize disk usage of each FILE, recursively for directories.

cdu

  • [117] cdu (for Color du) is a perl script which calls du and displays a pretty histogram with optional colors, allowing you to immediately see the directories which take up disk space.

dfc

ncdu

  • ncdu - ncurses disk usage
ncdu / --exclude /home --exclude /media --exclude /run/media
  check everything apart from home and external drives

ncdu / --exclude /media --exclude /run/media
  check everything apart from external drives
ncdu / --exclude /home --exclude /media --exclude /run/media --exclude /boot \
  --exclude /tmp --exclude /dev --exclude /proc
  just the root partition

Baobab

Other

  • Filelight creates an interactive map of concentric, segmented rings that help visualise disk usage on your computer.

Files and directories

touch

touch filename
  create a file or update timestamp

ls

ls
  list in row
ls -l
  long list

ls *
  files in directory and immediate subdirectories

just names;

ls -m1
  names only, one per line (the later -1 overrides -m, which would give a comma-separated list)
ls --format single-column
  column of names only
ls -l | grep - | awk '{print $9}'
  using awk to show the 9th word (name). strips colour.
ls -l | cut -f9 -s -d" "
  using cut to cut from the 9th word, using space as a delimiter. strips colour.
ls | cat
  neat

ls -a
  show hidden files
ls  -A
  show hidden files, exclude . and ..

cd

cd change/directory/path

mv

mv position1 ~/position2
  basic move

mkdir

mkdir directory
mkdir directory -p
  no error if existing, make parent directories as needed

ln

symlink

ln -s {target-filename}
ln -s {target-filename} {symbolic-filename}
  create soft link

Other

stat .
  display file or file system status
stat -c "%n %a" * | column -t
  directory files + octal
fasd shortcut examples;

v def conf       =>     vim /some/awkward/path/to/type/default.conf
j abc            =>     cd /hell/of/a/awkward/path/to/get/to/abcdef
m movie          =>     mplayer /whatever/whatever/whatever/awesome_movie.mp4
o eng paper      =>     xdg-open /you/dont/remember/where/english_paper.pdf
vim `f rc lo`    =>     vim /etc/rc.local
vim `f rc conf`  =>     vim /etc/rc.conf

alias defaults;

alias a='fasd -a'        # any
alias s='fasd -si'       # show / search / select
alias d='fasd -d'        # directory
alias f='fasd -f'        # file
alias sd='fasd -sid'     # interactive directory selection
alias sf='fasd -sif'     # interactive file selection
alias z='fasd_cd -d'     # cd, same functionality as j in autojump
alias zz='fasd_cd -d -i' # cd with interactive selection
  • Desk - Lightweight workspace manager for the shell. Desk makes it easy to flip back and forth between different project contexts in your favorite shell. [123]

Copying files

cp - copy files and directories
scp -P 2264 foobar.txt your_username@remotehost.edu:/some/remote/directory
scp -rP 2264 folder your_username@remotehost.edu:/some/remote/directory
  • dcp is a distributed file copy program that automatically distributes and dynamically balances work equally across nodes in a large distributed system without centralized state. http://filecopy.org/

Removing files

rm file

rm -rf directory
find * -maxdepth 0 -name 'keepthis' -prune -o -exec rm -rf '{}' ';'
  remove all but keepthis [125]

Viewing files

<filename

cat

cat filename
  output file to screen
cat -n filename
  output file to screen w/ line numbers
cat filename1 filename2
  output two files (concatenate)
cat filename1 > filename2
  overwrite filename2 with filename1
cat filename1 >> filename2
  append filename1 to filename2
cat filename{1,2} > filename3
  concatenate filename1 and filename2 into filename3

head

head filename
  top 10 lines of file
head -23 filename
  top 23 lines of file

tail

tail filename
  bottom 10 lines of file
tail -23 filename
  bottom 23 lines of file

Other

sed -n 20,30p filename
  print lines 20..30 of file [126]
  • MultiTail allows you to monitor logfiles and command output in multiple windows in a terminal, colorize, filter and merge.

File pagers

more

more is a filter for paging through text one screenful at a time. This version is especially primitive. Users should realize that less(1) provides more(1) emulation plus extensive enhancements.

less

less is an improvement on more and a funny name.

most

vimpager

other


Managing files

  • http://ranger.nongnu.org/ - ranger is a console file manager with VI key bindings. It provides a minimalistic and nice curses interface with a view on the directory hierarchy. It ships with "rifle", a file launcher that is good at automatically finding out which program to use for what file type.

setup ~/.config/ranger/ with defaults;

ranger --copy-config=all
  • Vifm is an ncurses based file manager with vi like keybindings/modes/options/commands/configuration, which also borrows some useful ideas from mutt.

If you use vi, Vifm gives you complete keyboard control over your files without having to learn a new set of commands.

  • https://github.com/D630/fzf-fs - acts like a very simple and configurable file browser/navigator for the command line by taking advantage of the general-purpose fuzzy finder fzf. Although coming without Miller columns, fzf-fs is inspired by tools like lscd and deer, which both follow the example set by ranger. [129]

Finding files

ls | sort -f | uniq -i -d
  # list duplicate files taking upper/lower case into account [130]

find

find /usr/share -name README
find ~/Journalism -name '*.txt'
find ~/Programming -path '*/src/*.c'
find ~/Journalism -name '*.txt' -exec cat {} \;
  exec command on result path (aliases don't work in exec argument)
find ~/Images/Screenshots -size +500k -iname '*.jpg'
find ~/Journalism -name '*.txt' -print0 | xargs -0 cat   (faster than above)

find / -group [group]
find / -user [user]
find . -mtime -[n]
  File's data was last modified n*24 hours ago

find . -mtime +5 -exec rm {} \;
  remove files older than 5 days
find . -type f -print0 | xargs -0 stat -c "%y %s %n"
find . -type f -links +1
  list hard links

locate

locate fileordirectory

locate /

locate / | xargs -i echo 'test -f "{}" && echo "{}"' | sh
  # only files

locate / | xargs -i echo 'test -d "{}" && echo "{}"' | sh
  # only directories [131]

"Although in other distros locate and updatedb are in the findutils package, they are no longer present in Arch's package. To use it, install the mlocate package. mlocate is a newer implementation of the tool, but is used in exactly the same way."

mlocate is a locate/updatedb implementation. The 'm' stands for "merging": updatedb reuses the existing database to avoid rereading most of the file system, which makes updatedb faster and does not trash the system caches as much. The locate(1) utility is intended to be completely compatible with slocate. It also attempts to be compatible with GNU locate, when that does not conflict with slocate compatibility.


Before locate can be used, the database will need to be created. To do this, simply run updatedb as root.

sudo updatedb
  # creates/updates a db file of paths that is queried by locate

/etc/updatedb.conf

PRUNE_BIND_MOUNTS = "yes"
PRUNEFS = "9p afs anon_inodefs auto autofs bdev binfmt_misc cgroup cifs coda configfs cpuset cramfs debugfs devpts devtmpfs ecryptfs exofs ftpfs fuse fuse.encfs fuse.sshfs fusectl gfs gfs2 hugetlbfs inotifyfs iso9660 jffs2 lustre mqueue ncpfs nfs nfs4 nfsd pipefs proc ramfs rootfs rpc_pipefs securityfs selinuxfs sfs shfs smbfs sockfs sshfs sysfs tmpfs ubifs udf usbfs vboxsf"
PRUNENAMES = ".git .hg .svn"
PRUNEPATHS = "/afs /media /mnt /net /sfs /tmp /udev /var/cache /var/lib/pacman/local /var/lock /var/run /var/spool /var/tmp"

/var/lib/mlocate/mlocate.db


strings /var/lib/mlocate/mlocate.db | grep -E '^/.*config'
  # query db directly, needs sudo or sudoers or acl

to sort

Editing files

See Vim, Emacs

echo "hello" >> greetings.txt

cat temp.txt >> data.txt
  To append the contents of the file temp.txt to file data.txt

date >> dates.txt
  To append the current date/time timestamp to the file dates.txt

Finding file content

grep
grep "stuff" sqldump.sql | fold -w 200 | grep -C 1 "stuff"

The first grep gets the (mile-wide) line that has the match, then fold will split the mile-wide line into 200 char long lines, and "grep -C 1" will show only the one 200 char wide line where the match is + 1 line of context before and after. [133]



  • sift - grep on steroids [134]


  • https://en.wikipedia.org/wiki/strings_(Unix) - a program in Unix-like operating systems that finds and prints text strings embedded in binary files such as executables. It can be used on object files and core dumps. Strings are recognized by looking for sequences of at least 4 (by default) printable characters terminating in a NUL character (that is, null-terminated strings). Some implementations provide options for determining what is recognized as a printable character, which is useful for finding non-ASCII and wide character text. Common usage includes piping its output to grep and fold or redirecting the output to a file.

Archiving

GNU paxutils

cpio

The cpio archive format has several basic limitations: it does not store user and group names, only numbers. As a result, it cannot be reliably used to transfer files between systems with dissimilar user and group numbering.
tar
-z: Compress archive using gzip program
-c: Create archive
-v: Verbose i.e display progress while creating archive
-f: Archive File name
tar -xvf foo.tar
  verbosely extract foo.tar
tar -xzf foo.tar.gz
  extract gzipped foo.tar.gz

tar -xjf foo.tar.bz2 -C bar/
  extract bzipped foo.tar.bz2 after changing directory to bar
tar -xzf foo.tar.gz blah.txt
  extract the file blah.txt from foo.tar.gz
tar -zcvf archive-name.tar.gz directory-name

tar -cjf foo.tar.bz2 bar/
  create bzipped tar archive of the directory bar called foo.tar.bz2
pax

Compression




zip

zip
unzip

unzip -l archive.zip
  # list files in archive

for z in *.zip; do unzip -o "$z"; done
  # unzip all in folder, overwriting existing files automatically

gzip

  • gzip, gunzip, zcat - compress or expand files

bzip2

7z

  • 7-Zip is a file archiver with the highest compression ratio. The program supports 7z (that implements LZMA compression algorithm), ZIP, CAB, ARJ, GZIP, BZIP2, TAR, CPIO, RPM and DEB formats. Compression ratio in the new 7z format is 30-50% better than ratio in ZIP format.
    • p7zip is a port of 7za.exe for POSIX systems like Unix (Linux, Solaris, OpenBSD, FreeBSD, Cygwin, AIX, ...), MacOS X and also for BeOS and Amiga. 7za.exe is the command line version of 7-zip, see http://www.7-zip.org/. 7-Zip is a file archiver with highest compression ratio.
    • man 7z (p7zip)
    • p7zip-light in AUR
7z x filename
  extract archive with directories

7z a myzip ./MyFolder/*
  add a folder to an archive

xz

Other

Generic

  • http://www.nongnu.org/atool/ atool is a script for managing file archives of various types (tar, tar+gzip, zip etc). The main command is aunpack which extracts files from an archive. Did you ever extract files from an archive, not checking whether the files were located in a subdirectory or in the top directory of the archive, resulting in files scattered all over the place? aunpack overcomes this problem by first extracting to a new directory. If there was only a single file in the archive, that file is moved to the original directory. aunpack also prevents local files from being overwritten by mistake.
  • dtrx stands for “Do The Right Extraction.” It's a tool for Unix-like systems that takes all the hassle out of extracting archives.
# Extract Files
extract() {
 if [ -f "$1" ] ; then
     case "$1" in
         *.tar.bz2)   tar xvjf "$1"    ;;
         *.tar.gz)    tar xvzf "$1"    ;;
         *.tar.xz)    tar xvJf "$1"    ;;
         *.bz2)       bunzip2 "$1"     ;;
         *.rar)       unrar x "$1"     ;;
         *.gz)        gunzip "$1"      ;;
         *.tar)       tar xvf "$1"     ;;
         *.tbz2)      tar xvjf "$1"    ;;
         *.tgz)       tar xvzf "$1"    ;;
         *.zip)       unzip "$1"       ;;
         *.Z)         uncompress "$1"  ;;
         *.7z)        7z x "$1"        ;;
         *.xz)        unxz "$1"        ;;
         *.exe)       cabextract "$1"  ;;
         *)           echo "\`$1': unrecognized file compression" ;;
     esac
 else
     echo "\`$1' is not a valid file"
 fi
}

Pagers

File types


xdg-mime default Thunar.desktop inode/directory
  to make Thunar the default file-browser
xdg-mime default xpdf.desktop application/pdf
  to use xpdf as the default PDF viewer
/usr/share/applications/defaults.list      (global)
~/.local/share/applications/defaults.list  (per user, overrides global)

[Default Applications]
mimetype=desktopfile1;desktopfile2;...;desktopfileN
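So, matching the xdg-mime examples above, a defaults.list might contain roughly:

[Default Applications]
inode/directory=Thunar.desktop
application/pdf=xpdf.desktop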

Users

Files

Commands

useradd username
  create user with defaults (no password)

useradd -D
  show defaults that will be used

useradd -m -g [initial_group] -G [additional_groups] -s [login_shell] [username]
  -m - add home dir
  -d - non-default home dir path
  -c comment
  -e 2006-06-30 - expires

useradd -m -g users -G audio,lp,optical,storage,video,wheel,games,power,scanner -s /bin/bash username
adduser
  interactive tool
passwd
  # set password of current account

passwd username
  # set password of username

passwd -S username
  # see password status of username
userdel username
  # remove user

userdel -r username
  # remove user plus their home folder, mail spool
cat /etc/passwd | cut -d: -f 1,3,6 | grep "[5-9][0-9][0-9]"
  # List users on a system [140]
groups [username]
  # show users groups

Groups

groupadd [group]
  add group

gpasswd -a [user] [group]
  add user to group
gpasswd -d [user] [group]
  delete user from group

User must relogin for new group to take effect.

id [username]
  show user id and their groups ids
last / lastb
  show listing of last logged in users

Permissions

stat -c '%A %a %n' *
  list permissions in octal

chmod

chmod
  change file mode bits

or

Users;

  • u - the user who owns the file.
  • g - other users who are in the file's group.
  • o - all other users.
  • a - all users; the same as ugo.

Operation;

  • + - to add the permissions to whatever permissions the users already have for the file.
  • - - to remove the permissions from whatever permissions the users already have for the file.
  • = - to make the permissions the only permissions that the users have for the file.

Permissions;

  • r - the permission the users have to read the file.
  • w - the permission the users have to write to the file.
  • x - the permission the users have to execute the file, or search it if it is a directory.
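Putting the pieces together, some typical invocations:

chmod u+x script.sh
  give the owning user execute permission
chmod go-w notes.txt
  remove write permission from group and others
chmod -R u+rwX,go+rX public_html/
  recursive; X adds execute only to directories and files already executable by someone
chmod 644 file
  octal form: rw- for the owner, r-- for group and others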

chown

For changing user and group ownership of file

chown -R user:group .
  change all and subitems [142]

su / sudo

Run things as root or another user. There are a number of methods to do higher privilege things, with differing resultant permissions. To refind good forum threads..

sudo -s
  open a new shell as root, but with your original user's environment variables

sudo -i
  open a new shell as if root had logged in; root's environment variables, root's .bash_profile, etc.

sudo su -
  same as above

snippet to source...

sudo sh -c "echo 'something' >> /etc/privilegedfile"

sudoers

visudo
  command to safely add sudoers rights using nano
user ALL=(ALL) ALL

Access Control Lists

Partition must be mounted with acl option. "Operation not supported" from setfacl means this is not so.

mount -o remount,acl /

Set in /etc/fstab for persistence across boots.

Opening a directory requires execute permission.

setfacl -m "u:username:permissions"
setfacl -m "u:uid:permissions"
  add permissions for user

setfacl -m "g:groupname:permissions"
setfacl -m "g:gid:permissions"
  add permissions for group

setfacl -m "u:user:rwx" file
  add read, write, execute perms for user for file
setfacl -Rm "u:user:rw" /dir
  add recursive read, write perms for user for dir
setfacl -Rdm "u:user:rw" /dir
  add recursive read, write perms for user for dir and make them default for future changes
getfacl ./
  shows access control list for directory or file

Session permission management

For non-root mount permissions, etc., i.e. to make USB drives, etc. accessible to non-root users.

PAM

Configuration file(s):

/etc/pam.conf

or

/etc/pam.d/*

Packages

  • http://www.freedesktop.org/software/PackageKit/ PackageKit is a system designed to make installing and updating software on your computer easier. The primary design goal is to unify all the software graphical tools used in different distributions, and use some of the latest technology like PolicyKit. The actual nuts-and-bolts distro tool (yum, apt, conary, etc) is used by PackageKit using compiled and scripted helpers. PackageKit isn't meant to replace these tools, instead providing a common set of abstractions that can be used by standard GUI and text mode package managers. PackageKit itself is a system activated daemon called packagekitd. Being system activated means that it's only being run when the user is using a text mode or graphical tool, and quits when it's no longer being used. This means we don't delay the boot sequence or session startup and don't consume memory when not being used.


  • pkgs.org - Linux Software Catalog and Packages Search. RPM, DEB, TGZ, TXZ packages from well-known repositories of the Archlinux, Fedora, CentOS, RHEL, Scientific Linux, Debian, Ubuntu, openSUSE, Mandriva, Mageia and Slackware distributions; package search by name, filename, summary, description, requires, provides, files and directories; powerful package browser (summary, description, files, requires, provides, changelog, etc.)

Basic

Debian

including ubuntu, linux mint and others

dpkg --get-selections > installed-software
  create list of installed software

dpkg --set-selections < installed-software
dselect
  reinstall from list
dpkg --list
  to check
Apt
apt-get install package
etc.  
apt-cache search 'web server'
etc.
apt-file show python
  list the files in a package (requires the apt-file package; run apt-file update first)

GUI

Arch

pacman

pacman -S [package]
  install [package] (avoid -Sy on its own, which can cause partial upgrades)

pacman -Syu
  upgrade system
pacman -Qo [file]
  check what package owns a file

pacman -Q
  list installed packages (add -i for detailed info)
pacman -Qqtd
  check whether there are any orphaned packages

pacman -Rsn packagename
  remove a package, its config files, and dependencies not required by other packages

pacman -Rc packagename
  remove package and deps ('cascade')
pacman -Rdd qt
pacman -Syuuu
  when qt4 replaced qt, but qt had deps.
  • cacheclean - Cleans up pacman packages. Users selects how many old versions to keep.
cacheclean {-p} {-v} <# of copies to keep>
# of copies to keep - (required) how many generations of each package.
-p - (optional) preview what would be deleted.
-v - (optional) show deleted packages.

for i in `pacman -Qdt | awk '{print $1}'`; do pacman -R $i ; done

Helpers

yaourt packagename
  # search with interactive install

yaourt --noconfirm
  # no edit package build and no confirm prompt to install

yaourt -Syu --noconfirm
  # update all

yaourt -Syu --aur --noconfirm
  # update also aur

Packages

 makepkg -g >> PKGBUILD && makepkg

"ERROR: PKGBUILD contains CRLF characters and cannot be sourced." Solution:

sed -i 's/^M//' PKGBUILD

[ctrl+v][ctrl+m] for the ^M symbol.

pacman -Qdt
  list all orphans
pacman -Rsn $(pacman -Qdtq)
  remove them all

Repos

Other

  • gist: unused.sh - finds unused package on your Arch Linux box

RPM

7z also extracts rpm to cpio, and cpio to files.

Yum

  • Yum is an automatic updater and package installer/remover for rpm systems. It automatically computes dependencies and figures out what things should occur to install packages. It makes it easier to maintain groups of machines without having to manually update each one using rpm. Yum has a plugin interface for adding simple features. Yum can also be used from other python programs via its module interface.
dnf

Gentoo Portage

Guix

Ports Collection

Other

Time

echo "Today's date is `date -I`"
date
  Local datetime
date -d @1337000000
  Convert unixtime to datetime
date -ud @1337000000
  Convert unixtime to UTC datetime
ntpdate pool.ntp.org && hwclock --systohc && hwclock --adjust
  Synchronize both your system clock and hardware clock and calculate/adjust time drift.
  Do not run this command if you already have ntpd running! [146]

Swap

swapon -s
free -m

Suspension, hibernation, power

  • https://wiki.archlinux.org/index.php/Pm-utils - a collection of shell scripts that wrap the kernel mode suspend/resume with the various hacks. These hacks are needed to work around bugs in drivers and subsystems that are not yet aware of suspend. It is easily extensible by putting custom hooks into a directory, which can either be done by the system administrator or those hooks can be part of a package, especially if this package needs special attention during a system suspend or power state transition.
sudo pm-suspend

/etc/sudoers

username  ALL = NOPASSWD: /usr/sbin/pm-hibernate
username  ALL = NOPASSWD: /usr/sbin/pm-suspend
%group   ALL = NOPASSWD: /usr/sbin/pm-hibernate
%group   ALL = NOPASSWD: /usr/sbin/pm-suspend

Performance

  • http://openbenchmarking.org/ OpenBenchmarking.org is an open, collaborative testing platform designed by Phoronix Media and the developers behind the Phoronix Test Suite, the most comprehensive benchmarking platform for Linux and other operating systems. OpenBenchmarking.org makes the Phoronix Test Suite an even more extensible platform for conducting automated tests with complete integration into Phoronix Test Suite 3.0-Iveland as well as within Phoromatic, an online test remote management system designed for managing test farms within enterprise environments.

Other software

Printing

CUPS

Printing system.


lp filename
lpr filename

lpstat -p -d

lp -d printer filename
lpr -P printer filename
lpoptions -d printer
program | lp
program | lp -d printer
program | lpr
program | lpr -P printer

Inferno