Docker Compose
is a powerful tool for managing multi-container applications. It simplifies the process of defining,
running, and scaling your entire application stack by using a single YAML configuration file.
By defining your application's services, networks, and volumes in a Compose file, you can easily
create and manage your entire environment with a single command. This streamlines development,
testing, and deployment, making it easier to collaborate and iterate on your projects.
Docker Compose File
A Compose file, typically named compose.yaml or docker-compose.yaml, is placed in your
working directory. While compose.yaml is the preferred name, Compose also supports docker-compose.yaml for
backward compatibility. If both files exist, Compose prioritizes compose.yaml.
To start all the services defined in your compose.yaml file
# docker compose up
To stop and remove the running services.
# docker compose down
If you want to monitor the output of your running containers and debug issues, you can view the
logs with,
# docker compose logs
To list all the services along with their current status
# docker compose ps
Commonly used options
--dry-run            Execute command in dry run mode
--env-file           Specify an alternate environment file
-f, --file           Specify one or more Compose files
-p, --project-name   Specify a project name
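For example, to bring a stack up with a specific project name and environment file (both names below are just placeholders):
# docker compose -p myproject --env-file .env.dev up -d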
Sample docker compose file
services:
  frontend:
    image: example/webapp
    ports:
      - "443:8043"
    networks:
      - front-tier
      - back-tier
    configs:
      - httpd-config
    secrets:
      - server-certificate

  backend:
    image: example/database
    volumes:
      - db-data:/etc/data
    networks:
      - back-tier

volumes:
  db-data:
    driver: flocker
    driver_opts:
      size: "10GiB"

configs:
  httpd-config:
    external: true

secrets:
  server-certificate:
    external: true

networks:
  # The presence of these objects is sufficient to define them
  front-tier: {}
  back-tier: {}
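If this file is saved as compose.yaml in the working directory (and assuming the referenced external configs, secrets and the flocker volume driver already exist on your system), the whole stack can be brought up in the background with:
# docker compose up -d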
Usage
Using multiple Compose files
# docker compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db
Docker
Docker is a platform that allows you to build, ship, and run applications in containers.
It's essentially a tool that packages your application and its dependencies into a standardized unit
called a container. This container can then be run consistently across different environments, such as
development, testing, and production.
Key benefits of using Docker:
Portability: Containers can be easily moved between different machines or environments without
changes to the application.
Isolation: Each container runs in its own isolated environment, preventing conflicts between
different applications.
Efficiency: Docker containers are lightweight and share the host operating system's kernel,
making them more efficient than traditional virtual machines.
Scalability: Docker can be used to easily scale applications up or down based on demand.
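As a minimal illustration (the image and port mapping here are only an example), a containerized web server can be started with a single command:
# docker run --rm -d -p 8080:80 nginx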
ngrep
Network grep, or 'ngrep', is a tool that provides most of GNU grep's common features,
applying them to the network layer.
As per the Linux man page, ngrep is a pcap-aware tool that allows you to specify extended regular
expressions to match against the data payloads of packets.
It currently recognizes TCP, UDP and ICMP across Ethernet, PPP, SLIP, FDDI and null interfaces,
and understands bpf filter logic in the same fashion as more common packet sniffing tools,
such as tcpdump and snoop.
Basic Usage
$ ngrep 'Linux' -q
The above command will filter the packets which contain the word 'Linux'. The '-q' option, as per the man
page, means 'Be quiet; don't output any information other than packet headers and their payloads (if
relevant)'.
It is good to include -q every time.
We can add more of the search options that we would use with the grep command. A few examples are below.
$ ngrep -i 'Linux' -q // case-insensitive
$ ngrep -iv 'Linux' -q // case-insensitive and inverse match
$ ngrep -wi 'Linux' -q // case-insensitive exact word 'linux'
$ ngrep -W byline -q
The 'byline' option, like '-q', prints the output in a format that is easy to read.
The other available options are 'normal|single|none'. In my opinion, the most useful option is byline.
Options
$ ngrep -W normal -q
$ ngrep -W single -q
$ ngrep -W none -q
Commonly Used 'bpf' filter options
$ ngrep -q 'req' 'host 192.168' // matches all packets containing the string 'req' sent to or from an IP address starting with 192.168
$ ngrep -q 'req' 'dst host 192.168' // does the same as above, but matches only the destination host
$ ngrep -q 'req' 'src host 192.168' // does the same as above, but matches only the source host
[root@localhost]# ngrep 'REGISTRATION' port 1521 -q -W byline -T
interface: eth1 (10.10.56.128/255.255.255.128)
filter: (ip or ip6) and ( port 1521 )
match: REGISTRATION

T +58.501281 10.10.56.233:39990 -> 10.10.56.231:1521 [AP]
...........i.......^...).......................................
select CREATED_AT, LAST_LOGGED_AT FROM REGISTRATION_TABLE WHERE APP_ID = :1 AND STATUS = 'ACT' ORDER BY CREATED_AT DESC
.................... .............. .......... 3ceb10b1d80acc72c0f62681e0045859
Linux Networking Commands
March 10, 2018
ping
The ping command sends echo requests to the host you specify on the command line, and lists the responses received and their round-trip time. ping will send echo requests indefinitely until you stop it with Ctrl+C (SIGINT). You can also add the -c option to send a fixed number of requests.
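Usage example (the hostname below is just a placeholder):
$ ping -c 4 example.com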
telnet
The telnet command is used for interactive communication with another host using the TELNET protocol. It begins in command mode, where it prints a telnet prompt ("telnet> "). If telnet is invoked with a host argument, it implicitly performs an open command with that host.
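Usage example, opening an interactive connection to a web server on port 80 (placeholder hostname):
$ telnet example.com 80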
netstat
Displays the contents of /proc/net files. It works with the Linux Network Subsystem and will tell you what the status of ports is, i.e. open, closed, waiting, masquerade connections, and a few other details.
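A commonly used invocation, for example, lists listening TCP/UDP sockets along with the owning programs (run as root to see all process names):
# netstat -tulpn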
tcpdump
This will capture packets off a network interface and interprets them for you. It understands all basic internet protocols, and can be used to save entire packets for later inspection.
usage examples:
# tcpdump port 22
# tcpdump -vv dst 192.168.65.133 and tcp
hostname
Tells the user the host name of the computer they are logged into.
$ hostname
traceroute
traceroute will show the route of a packet. It attempts to list the series of hosts through which your packets travel on their way to a given destination.
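Usage example (placeholder hostname):
$ traceroute example.com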
nmap
nmap is a powerful network exploration tool and security scanner; it is a very advanced network tool used to query machines (local or remote) as to whether they are up and what ports are open on these machines.
# nmap -v -A scanme.nmap.org
iftop
iftop – display bandwidth usage on an interface by host
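Usage example (the interface name here is an assumption; replace it with yours, and run as root):
# iftop -i eth0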
ifconfig
ifconfig is used to configure the kernel-resident network interfaces. It is used at boot time to set up interfaces as necessary. After that, it is usually only needed when debugging or when system tuning is needed.
If no arguments are given, ifconfig displays the status of the currently active interfaces. If a single interface argument is given, it displays the status of the given interface only; if a single ‘-a’ argument is given, it displays the status of all interfaces, even those that are down. Otherwise, it configures an interface.
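For example, to show all interfaces including those that are down:
$ ifconfig -a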
iwconfig
iwconfig is similar to ifconfig , but is dedicated to the wireless interfaces.
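Running it without arguments shows the wireless parameters of all interfaces:
$ iwconfig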
ifup/ifdown/ifquery
ifup – bring a network interface up.
ifdown – take a network interface down
ifquery – parse interface configuration
usage examples
$ ifdown eth0
$ ifup eth0
$ ifquery eth0
host
Performs a simple lookup of an internet address (using the Domain Name System, DNS).
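Usage example (placeholder hostname):
$ host example.com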
dig
The "domain information groper" tool. More advanced than host. If you give a hostname as an argument, it outputs information about that host, including its IP address, hostname and various other information.
To find the host name for a given IP address (i.e. a reverse lookup), use dig with the '-x' option. dig takes a huge number of options (to the point of being too many); refer to the manual page for more information.
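Usage examples, a forward lookup for a placeholder hostname and a reverse lookup for a public DNS server address:
$ dig example.com
$ dig -x 8.8.8.8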
whois
whois is used to look up the contact information from the "whois" databases; the servers are only likely to hold records for major sites.
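Usage example (placeholder domain):
$ whois example.com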
wget
(GNU Web get) used to download files from the World Wide Web. To archive a single web-site, use the -m or --mirror (mirror) option. Use the -nc (no clobber) option to stop wget from overwriting a file if you already have it. Use the -c or --continue option to continue a file that was unfinished by wget or another program.
Simple usage example:
$ wget url_for_file
This would simply get a file from a site.
wget has many more options refer to the examples section of the manual page, this tool is very well documented.
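For example, to resume an interrupted download (the URL below is a placeholder):
$ wget -c https://example.com/file.iso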
curl
curl is another remote downloader. This remote downloader is designed to work without user interaction and supports a variety of protocols, can upload/download and has a large number of tricks/work-arounds for various things. It can access dictionary servers (dict), ldap servers, ftp, http, gopher, see the manual page for full details.
To access the full manual (which is huge) for this command type:
$ curl -M
For general usage you can use it like wget. You can also login using a user name by using the -u option and typing your username and password like this:
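(the credentials and URL below are placeholders)
$ curl -u username:password https://example.com/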
ssh
ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine. It is intended to provide secure encrypted communications between two untrusted hosts over an insecure network. X11 connections, arbitrary TCP ports and UNIX-domain sockets can also be forwarded over the secure channel.
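Usage examples (user and host are placeholders); the second form runs a single command on the remote machine:
$ ssh user@remote_host
$ ssh user@remote_host uptime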
scp
scp copies files between hosts on a network. It uses ssh(1) for data transfer, and uses the same authentication and provides the same security as ssh(1). scp will ask for passwords or passphrases if they are needed for authentication.
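Usage example (file name, user and host are placeholders):
$ scp file.txt user@remote_host:/tmp/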
sftp
sftp is an interactive file transfer program, similar to ftp(1), which performs all operations over an encrypted ssh(1) transport. It may also use many features of ssh, such as public key authentication and compression. sftp connects and logs into the specified host, then enters an interactive command mode.
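Usage example (user and host are placeholders):
$ sftp user@remote_host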
Grep Command
March 3, 2018
Grep is a powerful tool which allows you to search for a word from the command line. In this post we
can see some grep command usage examples. The grep command is basically for searching for a string
in a given file.
By default, grep prints the matching lines.
Syntax
$ grep [options] pattern [file ...]
Add the '--color=auto' option to get colored output.
Grep command options.
Case-insensitive searching
$ grep -i 'string_to_search' file_name
Searching with wildcard characters
$ grep -i 'search key.*' file_name
the above command will list every line containing 'search key'; whatever comes after it does not
matter.
Finding the number of matching lines in a file for 'search key'
$ grep -c 'search key' file_name
add the 'w' option to match the exact word.
$ grep -wc 'search key' file_name
Get the line numbers with the grep output
$ grep -n 'search key' file_name
Show lines after the match
$ grep -A <N> 'search key' file_name
the above command will show ‘N’ lines after the match.
Show lines before the match
$ grep -B <N> 'search key' file_name
the above command will show ‘N’ lines before the match.
Show lines around the match
$ grep -C <N> 'search key' file_name
the above command will show ‘N’ lines around the match.
Basic Linux Commands
Feb 28, 2018
ls
List the files in the directory. You can add ‘-a’
option to show all files including hidden files. Adding the directory path will list files
in the mentioned directory.
Some commonly used options for ls.
$ ls -l
$ ls -la
$ ls -lrt (reverse sort with the time of change in the files)
$ ls -lrth (the 'h' option prints sizes in a human-readable format)
cd
Change directory, move to another directory. cd command without any directory name will
redirect to the home directory.
pwd
Print working directory.
rm
Deletes files and directories; for directories you need to add the '-r' option, like rm -r
folder_name
mkdir
The command to create a directory.
touch
The touch command is used to create a file.
cp
The cp command is used to copy a file or directory. To copy a directory and its files recursively
you need to add the '-r' option. We need to specify the source and destination to the
command, like cp 1.txt 2.txt
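For example, to copy a directory recursively (directory names are placeholders):
$ cp -r source_dir destination_dir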
mv
mv command is used to move files or directories through the command line. We can use the same
command to rename a file.
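Usage examples (file names are placeholders); the second form renames a file:
$ mv file.txt /tmp/
$ mv old_name.txt new_name.txt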
echo
The echo command simply returns whatever is given to it, like an echo. It is helpful in some
data operations with the help of the redirection operators.
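For example, to write a line of text into a file (the file name is a placeholder):
$ echo "hello" > file.txt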
cat
cat command is used to display contents of a file. It is very useful for small files.
man & --help
To know more details about a command and how to use it, use the man command.
It shows the manual pages of the command. eg ‘man cp’ will show the details of
the cp command.
Passing the --help argument to a command will give similar information.
$ man cp
$ cp --help
Compiling Linux Kernel from Source code
November 12, 2014
Compiling the Linux kernel source code: it is a nice thing to compile the
Linux kernel source code to build your own custom kernel.
It can be done in very few steps as described below.
The configuration step uses a command-line, menu-based interface to set up your kernel.
Step 4: Build the kernel configuration file
There are three ways to build the kernel configuration file.
1. make oldconfig
2. make menuconfig
3. make xconfig/gconfig
We use make menuconfig; now run the make menuconfig command.
In the window that opens you can configure the options for the file system, network, input/output
devices, and so on.
You may not know what to select; google it and find out what each option does. Then save the
configuration file and rename it ".config". Or you can copy the current kernel configuration to
the present working directory, load it in the menu, edit it if you need to, save it and rename it
".config".
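On many distributions the running kernel's configuration is available under /boot, so copying it into the source directory is a one-liner (the path is an assumption; check your system):
$ cp /boot/config-$(uname -r) .config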
Step 5: Compile the kernel
Run the make command and wait.
For compiling kernel modules run the make modules command.
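To speed the build up you can run make with parallel jobs (a common option; adjust the job count to your CPU):
$ make -j$(nproc)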
It will take a lot of time depending upon your system. After that you can test your kernel using
qemu.
In terminal : qemu-system-x86_64 -kernel directory/linux-3.16.3/arch/x86_64/boot/bzImage
If everything is in order you can see the kernel booting in qemu; since our kernel doesn't
have an initial file system, you will see a kernel panic message.
It is advisable to change the path to your own build directory. That's
all; be patient while compiling the kernel, it will take a while!