Securely Copy Files (scp): a tool for copying files over SSH

A very good tool for securely copying files between machines over the SSH protocol is scp. It allows you to upload files to a target machine as well as download them from a given source. The tool is usually built into the system, so it works on many distributions. Below I will show how you can send and download files. For the transfer to work, a running SSH service is required, because it is the basis of scp's operation. When using the tool you can specify the port as a parameter if it has been changed; the standard port used by the SSH daemon is 22.

In Linux, scp (Secure Copy) is a command-line utility used for securely transferring files between local and remote systems. It is a secure alternative to cp, which is not secure when transferring files over a network.

The scp command is commonly used for copying files to or from a remote server. It uses the SSH protocol to securely transfer files and provides the same level of security as SSH. The syntax of the scp command is as follows:
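
A general sketch of the form (options are optional):

    scp [options] [source] [destination]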

Here, [source] is the file or directory you want to copy, and [destination] is the location where you want to copy the file or directory.

Some common options used with the scp command are:

  • -r: Copies directories recursively
  • -P: Specifies the port number to use for the SSH connection
  • -i: Specifies the path to the identity file used for authentication

For example, to copy a file named file.txt from a remote server to the local machine, you would use the following command:
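
For example (the host name, user and paths here are placeholders):

    scp user@remote-host:/path/to/file.txt /local/directory/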

This command will copy the file from the remote server to the local machine at the specified directory.

Similarly, to copy a directory named dir from the local machine to a remote server, you would use the following command:
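
For example, again with placeholder names:

    scp -r dir user@remote-host:/remote/directory/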

This command will copy the directory and its contents from the local machine to the remote server at the specified directory.

Let’s start by creating an example file that we will transfer: 
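
For instance (the file name example.txt is just an example used throughout this section):

    echo "test content" > example.txt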

In the next step, let's move on to uploading the file. In my case, the SSH port has been changed to 2222:
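
Using the user and host that appear later in this article (soban@soban.pl) – treat them as placeholders for your own:

    scp -P 2222 example.txt soban@soban.pl:~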

The first time you connect, you will be asked to confirm the host's fingerprint.
As you can see, the file has been sent correctly. 

Instead of the '~' sign at the end, we can specify where the target file should be placed (/tmp/example-path):
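
Along these lines:

    scp -P 2222 example.txt soban@soban.pl:/tmp/example-path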

There are many combinations; you can send, for example, all files with a given extension (*.tar.gz) to the user's home directory, which is symbolized by '~':
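
For example, assuming such archives exist in the current directory:

    scp -P 2222 *.tar.gz soban@soban.pl:~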

An interesting parameter is '-r', with which we can transfer entire folders. Here is an example of copying a folder from the local machine to the remote machine:
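
A sketch with a hypothetical folder name mydir:

    scp -P 2222 -r mydir soban@soban.pl:~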

OK, after the file has been successfully sent to the target machine, let’s delete the local file we created above and try to download it back: 
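
Assuming the file name used above:

    rm example.txt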

Next, let’s move on to downloading the file from the remote server to the local machine: 
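
The source and destination simply swap places, for example:

    scp -P 2222 soban@soban.pl:~/example.txt .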

Above I gave an example of how to send an entire folder from a local machine to a remote machine. Of course, we can also do it the other way around. To download a remote folder to a local machine, use the '-r' parameter:
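
For example, using the hypothetical folder from before:

    scp -P 2222 -r soban@soban.pl:~/mydir .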

The scp utility has more parameters; you can find them by reading the man page:
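
That is:

    man scp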

It is worth paying attention to the '-l' parameter, with which we can limit the bandwidth used by the transfer (specified in Kbit/s). This is useful when transferring larger files so as not to saturate your connection.

If you are tired of constantly entering your password, I encourage you to read how you can connect over SSH without providing a password; then copying files with scp will become even more convenient: generate ssh key pair in linux.

In my opinion, scp is good for transferring files quickly as a one-off. However, if you exchange files between machines often, a more convenient way is to use sshfs, as described here: sshfs great tool to mount remote file system.

sshfs great tool to mount remote file system

SSHFS (SSH File System) is a secure file transfer system that enables users to remotely access and manage files on a remote server over an encrypted SSH (Secure Shell) connection. SSHFS uses the SSH protocol to establish a secure connection between the local and remote systems, which enables users to securely transfer files between the two systems.

To use SSHFS, the user needs to have SSHFS installed on their local system as well as the remote system that they want to connect to. Once SSHFS is installed, the user can mount the remote system as a local directory on their system, and access the remote files as if they were stored locally.

SSHFS provides a secure and convenient way to access and manage files on remote systems, without the need for additional software or complicated configuration. It also enables users to access files on remote systems using standard file operations, such as copying, moving, and deleting, making it a simple and effective way to manage files on remote systems.

SSH Filesystem (sshfs) is a very useful tool for remotely transferring files over the SSH protocol. An additional advantage is that everything is encrypted. It is a convenient way to mount a remote folder and work on its files. Below I will briefly show how to install sshfs and how to mount a remote folder. Additionally, at the end we will make an entry in /etc/fstab, so that the resource is mounted automatically after restarting the system. Let's move on to installing the tool itself:
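
On Debian-based systems (including Kali) this is typically:

    sudo apt install sshfs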

In this case, as you can see, the installation was done on Kali Linux; however, the procedure is the same on Debian.

Let's move on to mounting itself. At this point I will point out that the default port is 22; in my case, however, the port has been changed to 2222. For services such as SSH, I try to change the default ports so as not to get caught by bots and not end up in databases such as shodan.io. The command itself is very simple, but first we need to create a folder:
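
This is the mount point used later in the example:

    mkdir /home/kali/myremotedir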

Let’s try to mount a remote folder:
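
Using the names and the non-standard port described above:

    sshfs -p 2222 soban@soban.pl:/home/soban /home/kali/myremotedir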

During mounting, we will be asked whether the fingerprint is correct, and then for the system password. The command itself breaks down as follows: 'soban' is the username; 'soban.pl' is the domain name (you can also put an IP address here); the next element, '/home/soban', is the remote folder that will be mounted; and after the space, '/home/kali/myremotedir' is the local folder where the remote one should be mounted. If everything went as planned, we can list '/home/kali/myremotedir' and it should show the contents of the remotely mounted folder '/home/soban'. Let's list the contents of the '/home/kali/myremotedir' folder:
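
For example:

    ls -l /home/kali/myremotedir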

Let’s create a remote file:
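
The file name matches what is checked after unmounting:

    touch /home/kali/myremotedir/example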

Now let’s unmount the remote folder and try listing it again:
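
Unmounting can be done with fusermount (or umount), then we list again:

    fusermount -u /home/kali/myremotedir
    ls -l /home/kali/myremotedir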

As expected, after unmounting the local folder is empty, because the file we created was written to the remotely mounted drive; the file '/home/kali/myremotedir/example' is therefore no longer visible locally.

The next step is to create a private key so that the folder can be mounted without entering a password. It is very important not to send your private key to anyone. How to generate and add a public key to a remote server can be read here: "Generate SSH key pair in Linux".

Now we will try to add an entry to /etc/fstab which will allow the remote folder to be mounted automatically at system startup.
To do this, edit the /etc/fstab entry and add this entry:
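
A sketch of such an entry, assuming the paths, port and key location used in this article; the exact options can vary, and the uid/gid values must match your local user (see the check below):

    soban@soban.pl:/home/soban /home/kali/myremotedir fuse.sshfs _netdev,allow_other,port=2222,IdentityFile=/home/kali/.ssh/id_rsa,uid=1000,gid=1000 0 0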

It is important that all the data is correct; in order to verify the uid/gid parameters, you can use the 'id' command:
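
For example, for the local user kali:

    id kali
    # the uid= and gid= numbers shown should match the values used in /etc/fstab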

Now we can move on to mounting the resource:
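
Since the entry is already in /etc/fstab, it is enough to point mount at the mount point:

    sudo mount /home/kali/myremotedir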

When mounting for the first time, we may be asked to accept and confirm that the fingerprint is correct. After verifying that the remote resource mounts correctly, we can restart the system. One note here: the system may take longer to boot.

Generate SSH key pair in Linux

A very convenient way to log into remote systems via ssh is without the use of passwords. Here it is very important not to share your private key with anyone. Currently, when trying to connect, I am asked for the password to the server:
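
That is, using the non-standard port:

    ssh soban@soban.pl -p2222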

During the connection, we will be asked whether the fingerprint is correct, and then for the user password that is set on the remote server. In the command 'ssh soban@soban.pl -p2222' I gave the username 'soban', the domain 'soban.pl', and with '-p2222' the port '2222'. The default SSH port is 22, but in this case I changed it so that it does not show up in scans – this increases security, as bots and hackers often look for port 22, the default SSH port.

Let’s move on to generating the key and copying it to the server:
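
Generating the key pair (RSA, to match the file names mentioned below):

    ssh-keygen -t rsa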

This is what the key generation looks like; I hit Enter for each question:

As a result, a private key was generated (/home/kali/.ssh/id_rsa) and a public key (/home/kali/.ssh/id_rsa.pub) that we will place on the remote server:
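
One easy way to place it there is ssh-copy-id, remembering about the non-standard port:

    ssh-copy-id -p 2222 soban@soban.pl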

This is the last time we log in to the server by entering the password; from now on, we will not be asked for one. This way we have added our public key to the remote server's '.ssh/authorized_keys' file.

More security for wp-admin in nginx

Some time ago I noticed that there are attempts to hack my WordPress by logging into the backend of the website. A bot or a hacker tries to do this using a set of passwords. I decided to secure the website's backend by requiring additional authentication. In nginx we can set this up as follows:
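
A minimal sketch of such a location block (the realm string is arbitrary, and the existing PHP handling for the site must still apply to this location):

    location /wp-admin {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }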

We still need to provide the username for authorization and save it to the file (/etc/nginx/.htpasswd) that we entered in the nginx configuration. Replace 'my_user_name' with the login of the user that will be used for authorization:
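
One way to do it (printf writes the user name followed by a colon, without a newline):

    sudo sh -c "printf 'my_user_name:' >> /etc/nginx/.htpasswd"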

The encrypted password itself is then set with openssl:
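
For example, appending an apr1 (htpasswd-compatible) hash:

    sudo sh -c "openssl passwd -apr1 >> /etc/nginx/.htpasswd"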

Openssl will ask you to come up with a password and enter it twice:

As a result, we will get a file with an encrypted password:
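
You can check it with cat; the hash shown here is shortened:

    sudo cat /etc/nginx/.htpasswd
    my_user_name:$apr1$...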

Before reloading nginx, we do a configuration verification:
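
That is:

    sudo nginx -t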

If everything is set correctly, we should receive the following message:
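
Typically something like this:

    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful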

Now we can restart the nginx service:
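
For example with systemd:

    sudo systemctl restart nginx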

The final verification will be to log in to the backend (e.g. www.example-page-wordpress.pl/wp-admin/); as a result, we should be asked for the login and password that we created above:

This is a simple trick to protect your WordPress from bot attacks. However, remember not to share passwords with anyone; setting default usernames and simple passwords is asking for trouble.

Useful tricks to view and search logs

It often happens that we have to catch a given message, e.g. "error", while browsing the logs. Alternatively, we look for occurrences of a given phrase in old files. Both "tail" and "grep" are very useful for this, especially if the logs are set to verbose mode, where there are a lot of messages in the log. We can also exclude certain phrases from the parsed output; it is enough to use grep properly.

Let’s start by looking at all nginx logs.

In this case, sorting from oldest to newest is very useful as we know where to find the newest log entries:
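
Assuming the default log location, the -t switch sorts by modification time and -r reverses the order, so the newest files end up at the bottom:

    ls -ltr /var/log/nginx/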

If we are interested in the latest data, we will focus on the access-soban.pl.log file.

I know that my website is monitored by uptimerobot.com and I would like to find out, for example, from what IP address the website gets a query, e.g. to add it to the firewall as trusted:
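
One way to do it (assuming the default /var/log/nginx location; in the standard access log format the first column is the client IP):

    grep -i uptimerobot /var/log/nginx/access-soban.pl.log | awk '{print $1}' | sort -u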

As you can see, in this case, the bot that is querying the server has the IP address: 208.115.191.21. If I wanted to see all calls from this IP address, I could view them this way:
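
For example, across all nginx logs at once, piping to less to make browsing easier:

    grep 208.115.191.21 /var/log/nginx/* | less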

If I press (shift + g) I’ll go to the bottom of the log:

It is worth noting that in this case the file in which the query is located is also given.

Now suppose I would like to view the logs, but without the "uptimerobot" entries:
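
The -v switch inverts the match:

    grep -v -i uptimerobot /var/log/nginx/access-soban.pl.log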

This way all queries containing the word "uptimerobot" were cut out. We can of course trim the console output further by chaining more "| grep -v" filters. Let's also cut out "sitemap":
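
For example:

    grep -v -i uptimerobot /var/log/nginx/access-soban.pl.log | grep -v sitemap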

One handy thing is to redirect the console output to a file, in this case "/tmp/file.log":
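
For example:

    grep -v -i uptimerobot /var/log/nginx/access-soban.pl.log | grep -v sitemap > /tmp/file.log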

Additionally, we can pack the file:
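
For instance with gzip (tar or zip would do just as well):

    gzip /tmp/file.log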

After packing the file, we can send it to another person. Sensitive data, such as queries or logins, can be cut out using grep, as we did above.

Now let’s move on to one of the most useful tools for watching live what happens when someone enters a page:
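
This is tail with the -f (follow) switch on both of the files mentioned below:

    tail -f /var/log/nginx/access-soban.pl.log /var/log/nginx/error-soban.pl.log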

At this point it is worth noting that we “caught” the logs from the files: “access-soban.pl.log” and “error-soban.pl.log”. However, the “error-soban.pl.log” log is empty, so its content is not shown below. However, if something came up, we would see the contents of the updated file on the console.

Useful at this point is to combine grep and tail. Suppose we don't want uptimerobot entries cluttering the console while we observe the logs, so we cut them out like this:
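
For example:

    tail -f /var/log/nginx/access-soban.pl.log | grep -v -i uptimerobot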

The given examples can be modified in any way. I encourage you to use tail and grep in various combinations, especially in situations where errors or warnings are repeated. Of course, these commands are not limited to nginx logs – they work on any log where we operate on text, be it system or application. Filtering the text as shown above is very helpful.

Check network connection and open TCP port via netcat

Netcat, also known as “nc,” is a versatile networking tool that is commonly used in Linux and other Unix-like operating systems. It is a command-line utility that can be used for various network-related tasks, such as port scanning, file transfer, and even as a lightweight web server.

The primary function of Netcat is to create network connections between two hosts, allowing data to be transferred between them. It can establish a connection as a client or a server, and it supports both TCP and UDP protocols. This makes it useful for testing network services, troubleshooting network issues, and performing security assessments.

Netcat can be used to scan for open ports on a remote host, allowing system administrators to identify potential security vulnerabilities. It can also be used to transfer files between hosts, similar to the way that the “cp” command works in Linux. Additionally, it can be used to create a simple web server, allowing files to be served over HTTP.

One of the key features of Netcat is its ability to operate in both interactive and non-interactive modes. In interactive mode, it acts like a chat program, allowing users to communicate with each other in real-time. In non-interactive mode, it can be used as a background process that quietly sends or receives data without any user interaction.

Overall, Netcat is a powerful and flexible tool that can be used for a wide range of networking tasks. Its simplicity and ease of use make it a popular choice among system administrators, network engineers, and security professionals.

Sometimes network connections are blocked by various network devices. To verify a connection over TCP we can use, for example, telnet. Before we start a server-side service like JBoss, we can use a simple utility like netcat to open the port and test it.

In this example we will be using two machines. However, one of them is “host-soban-pl” with the IP address: 10.10.14.100:

The second is “soban-pl” with the IP address: 10.10.11.105:

Below, for example, I will show you how to check an already open TCP connection and one that is closed. On the server side, nginx is listening on port 80, so this port is open:
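
From host-soban-pl, the open port can be checked with telnet; a successful attempt reports "Connected to ...":

    telnet 10.10.11.105 80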

Nmap below confirms that the port is open and additionally identifies the service as http:
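
For example, limiting the scan to this one port:

    nmap -p 80 10.10.11.105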

The conclusion is that the network path to the service is open and you can correctly connect over TCP. Now let's try to connect to a port where nothing is listening, e.g. port 81.
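
The same check against port 81:

    telnet 10.10.11.105 81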

As you can see, the connection is not possible because the port is closed. It may also happen that a service is listening on the port, but a firewall blocks it; then you need to set the appropriate rules on the firewall.

In this case, however, I know that the firewall does not block anything, so let's try to open the port with netcat. First we need to install netcat on Debian, which is done like this:
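
Typically (depending on the release, the package may be called netcat, netcat-traditional or netcat-openbsd):

    sudo apt install netcat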

Now let’s move on to running netcat on port 81:
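
With the traditional netcat syntax (-l listen, -p port); the OpenBSD variant uses "nc -l 81" instead:

    nc -l -p 81 &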

In this case, I deliberately added '&' at the end to leave the netcat process running in the background. At this point, netcat is listening on port 81.

Now we can proceed to checking the correctness of the connection with the use of telnet:
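
From the host-soban-pl machine:

    telnet 10.10.11.105 81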

In the meantime, on the server machine, we can use the netstat tool to verify the connection and check from which machine the traffic is coming:
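
For example, listing TCP connections numerically and filtering for port 81:

    sudo netstat -antp | grep :81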

As you can see, a correct connection from the 10.10.14.100 host has been established with the server on 10.10.11.105 on port 81.

To end the session, hit '^]' (Ctrl + ]), then type quit and press Enter.

In this way, we can verify the correctness of the network connection and whether any firewall or other network problem is an obstacle to its correct establishment. Netcat is a very powerful and useful tool, you can use it to transfer files etc. Netstat is also very useful in situations where network congestion occurs and one of the hosts is attacked. It is then easy to notice that a large number of network connections are made.

Netdiscover great tool for scanning and watching local network

Netdiscover is a popular network discovery tool that is used in Linux to identify live hosts on a network. It sends ARP (Address Resolution Protocol) requests to the network and then listens for replies from active hosts. By analyzing the replies, Netdiscover can build a list of all hosts that are currently active on the network.

Netdiscover is typically used by network administrators to identify all devices on a network and to detect any unauthorized devices that may be connected. It can also be used to identify the IP address of a device on a network that is not responding to conventional network scanning techniques.

Netdiscover is a command-line tool and has a range of options that allow it to be customized for specific network environments. For example, it can be set to scan a particular subnet or to use a specific network interface. Additionally, Netdiscover can output its results in a range of formats, including CSV and XML, making it easy to integrate with other tools and applications.

Overall, Netdiscover is a useful tool for network administrators who need to identify all devices on a network and detect any unauthorized devices that may be connected. Its ability to output results in a range of formats and its customizable options make it a versatile and valuable addition to any network security toolkit.

Netdiscover is a great tool to scan your local network for locally attached devices. It is installed by default in Kali Linux. However, if you want to use it on a Raspberry Pi, you need to install it yourself. You can do this as follows:
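
On Raspberry Pi OS (Debian-based) this is typically:

    sudo apt install netdiscover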

In VirtualBox I have the following network setup for Kali Linux:

The very use of the tool requires specifying the subnetwork in which we are located. We can check it like this:
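
For example, by looking at the address and routing table:

    ip a
    ip route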

In this case, we can scan network 192.168.1.0/24, so in netdiscover we can use:
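
For example:

    sudo netdiscover -r 192.168.1.0/24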

The scan results will appear on the screen:

Netdiscover also gives you the option to write the result to a file; in this case, the scan is refreshed every 2 seconds:
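
One possible way to do it (a sketch, not necessarily the exact original approach), using netdiscover's -P switch for parsable output, shell redirection for the file, and watch for the 2-second refresh:

    # single pass, parsable output written to a file
    sudo netdiscover -r 192.168.1.0/24 -P > /tmp/netdiscover.out
    # repeated every 2 seconds
    sudo watch -n 2 'netdiscover -r 192.168.1.0/24 -P > /tmp/netdiscover.out'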

Now we can also use nslookup to get the hostname:
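
For example, for one of the discovered addresses (192.168.1.1 is just an example here):

    nslookup 192.168.1.1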

Also we can use nmap:
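
For instance, a plain scan of the whole subnet:

    nmap 192.168.1.0/24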

You can use more parameters in nmap for more information, however this will significantly increase the scan time. Still, sometimes it’s worth the wait.

Domain list – get IP addresses

In this case, I'll show you how to get IP addresses from domains. We will save the domains in a file; then, after calling the command, we will get a list of IP addresses along with the names of the domains they belong to.

The command that we will use is "host":

I placed some domains, for example:
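
For instance a file called domains.txt (the file name and the domains other than netflix.com are just examples):

    cat domains.txt
    google.com
    netflix.com
    wikipedia.org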

After calling the command:
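
A sketch of such a loop (a reconstruction, not necessarily the exact original command); echo prints each domain name above its addresses:

    for d in $(cat domains.txt); do echo $d; host $d | awk '/has address/ {print $4}'; done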

It’s worth noting that in some cases there are more IPs – like netflix.com. This is because traffic is spread across different servers.

If you want to get only IP addresses without domain names, remove “echo” from the command:
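
That is:

    for d in $(cat domains.txt); do host $d | awk '/has address/ {print $4}'; done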

The result of the command is:

Random IP address generator in bash

There are situations where we can benefit from generating arbitrary IP addresses in bash. In this case, addresses from the reserved and private ranges (224.*.*.*, 10.*.*.*, 127.*.*.*, 0.*.*.*, 192.168.*.*, 172.16.*.* – 172.31.*.*) are not generated. Of course, the script can be adapted to your needs.
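
A minimal sketch of such a script (a reconstruction along the lines described above, not necessarily the original; it treats 172.16.*.* through 172.31.*.* as one excluded block):

    #!/bin/bash
    # random-ip.sh - print one random IPv4 address, re-rolling
    # until it falls outside the excluded ranges listed above.

    while true; do
        o1=$((RANDOM % 256))
        o2=$((RANDOM % 256))
        o3=$((RANDOM % 256))
        o4=$((RANDOM % 256))

        # excluded first octets: 0, 10, 127, 224
        case $o1 in
            0|10|127|224) continue ;;
        esac
        # excluded private ranges: 192.168.*.* and 172.16.*.* - 172.31.*.*
        if [ "$o1" -eq 192 ] && [ "$o2" -eq 168 ]; then continue; fi
        if [ "$o1" -eq 172 ] && [ "$o2" -ge 16 ] && [ "$o2" -le 31 ]; then continue; fi

        echo "$o1.$o2.$o3.$o4"
        break
    done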

You can also download this script:

You should make the script executable:
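
Assuming it was saved as random-ip.sh (the name used in the sketch above):

    chmod +x random-ip.sh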

Here’s the effect when you run it:
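
For example:

    ./random-ip.sh
    # prints a single random address outside the excluded ranges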

Long history in Linux, date of executed commands and some tricks

In my opinion, one of the most important things in Linux is the command history. Thanks to it, we know what has been done on the system and we can quickly check which commands were executed. When working on different systems it is very useful to grep the history and to use (Ctrl + R) in the shell to quickly search it. To make it even more useful, we will enlarge the history to 10000 lines (by default it is 1000) and add a date next to each issued command. To display the history, just run:
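
That is:

    history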

To enlarge the history and add a date when executing a given command, you should:
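
In bash this can be done by adding these variables to ~/.bashrc (HISTSIZE and HISTFILESIZE control the length, HISTTIMEFORMAT adds the date and time):

    echo 'HISTSIZE=10000' >> ~/.bashrc
    echo 'HISTFILESIZE=10000' >> ~/.bashrc
    echo 'HISTTIMEFORMAT="%F %T "' >> ~/.bashrc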

The whole thing can be reduced to one command.
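
For example:

    printf '%s\n' 'HISTSIZE=10000' 'HISTFILESIZE=10000' 'HISTTIMEFORMAT="%F %T "' >> ~/.bashrc && source ~/.bashrc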

After re-logging, it looks like this:

Finally, I would like to show a few more of the tricks I mentioned, e.g. grepping the history:
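
For example, looking for all uses of 'cp':

    history | grep cp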

Of course, you can use any other command instead of ‘cp’.

The mentioned useful way to search the history is to use (Ctrl + R). After pressing this combination, we can start typing any command and the history will be searched. If we hold Ctrl and press 'r' once again, we will jump to the previous matching command, counting from the bottom. In my case, as you can see, this is the second command from the bottom, that is:

If you are interested in where the history is saved and in what form, you can view the file or delete entries from it:
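
For bash it is the hidden .bash_history file in the home directory; it can be viewed or edited like any text file:

    cat ~/.bash_history
    nano ~/.bash_history    # or any other editor, to remove unwanted entries by hand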

If we accidentally use the wrong command on the system, it makes sense to remove it.