
use SSH agent sockets with tmux

Tmux is a nice helper for running multiple shells in parallel, especially over SSH connections. Additionally, your shells keep running after the connection is lost. But if you also use an SSH agent to use your SSH keys on other servers as well (agent forwarding), you have a problem when reconnecting to such tmux sessions.

If you reconnect, the path of the Unix socket to your SSH agent changes, and because the shells inside tmux are reused / reattached, they still point to the old socket, so your SSH agent no longer works as expected.

So after a reconnect you would have to change the SSH_AUTH_SOCK environment variable to the new value, which is not easy, especially for programs that are already running. It is easier to symlink the current Unix socket to a fixed path and only update the symlink on reconnect.

To achieve this, add the following code to your .bashrc (or the startup file of whatever shell you are using):

# if an agent socket is announced and it is a real socket (not our symlink),
# point the stable symlink at it
if [[ ! -z "$SSH_AUTH_SOCK" ]] ; then
    if [[ -S "$SSH_AUTH_SOCK" && $(basename "$SSH_AUTH_SOCK") != "localauthsock" ]] ; then
        ln -sf "$SSH_AUTH_SOCK" ~/.ssh/localauthsock
    fi
fi
# always use the stable symlink, so reattached shells keep a valid agent socket
export SSH_AUTH_SOCK=$HOME/.ssh/localauthsock
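
After reattaching a tmux session you can quickly check that the agent is reachable again; ssh-add talks to whatever SSH_AUTH_SOCK points to, so listing the loaded keys is enough:

# inside the reattached tmux shell: list the keys held by the agent
ssh-add -l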

partial use of DNS in OpenVPN split-tunnel in Ubuntu

In Ubuntu 18.04 LTS my network configuration is set up by NetworkManager and DNS is provided by systemd-resolved. If you use a split tunnel, which means you don't route all your traffic through the VPN connection, the DNS server announced by the VPN server will not be used at all.

To solve this issue, you can use the script update-systemd-resolved to automatically correct the DNS settings when the OpenVPN connection comes up.

As I wrote before, NetworkManager doesn't support all OpenVPN options, so for this solution you have to run openvpn directly and not via NetworkManager.

Installation

First you have to save the script to your disk. I saved it at /etc/openvpn/scripts/update-systemd-resolved.
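
A minimal sketch of the installation steps, assuming you copy the script there yourself (the download source is left out here); the important part is that it ends up executable at the path used below:

sudo mkdir -p /etc/openvpn/scripts
# place the update-systemd-resolved script here, then mark it executable
sudo chmod +x /etc/openvpn/scripts/update-systemd-resolved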

Configuration

Then you have to modify your OpenVPN profile and add the following lines to the end:

dhcp-option DOMAIN-ROUTE myvpndomain.de.
script-security 2
setenv PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
up /etc/openvpn/scripts/update-systemd-resolved
up-restart
down /etc/openvpn/scripts/update-systemd-resolved
down-pre

This will run the update script after the connection is set up and before it is torn down. Additionally, it marks all DNS queries for myvpndomain.de to use the DNS server provided by the VPN tunnel instead of the already configured DNS servers.
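
With the profile prepared, the tunnel is then started directly with openvpn, for example (path and file name are only placeholders):

# path to the modified profile is an example
sudo openvpn --config /etc/openvpn/client/myprofile.ovpn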

To check if it is successful, you can run:

systemd-resolve --status

And the output should contain something like:

Link 34 (tun0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.10.20.1
          DNS Domain: ~myvpndomain.de

Link 3 (wlp4s0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 192.168.1.1
          DNS Domain: ~.

import OpenVPN connection into Linux NetworkManager

Most desktop-oriented Linux distributions use NetworkManager to configure the network interfaces. NetworkManager also supports VPN connections, and so there is also a plugin for OpenVPN. To use it, you can set up your VPN connection in the NetworkManager UI, or you can import a .ovpn file, also via the console:

sudo nmcli connection import type openvpn file /home/frank/myconnectionprofile.ovpn

NetworkManager will parse the ovpn file, extract all known settings and convert them into a NetworkManager VPN profile. Unfortunately NetworkManager doesn't support every OpenVPN directive, so the import may not work. If that is the case, you can only use openvpn directly to connect to the VPN:

openvpn myconnectionprofile.ovpn
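
When the import does work, the imported profile can also be activated and deactivated from the console; the connection name is usually derived from the file name, so myconnectionprofile is only an assumption here:

# connection name assumed to match the imported file name
nmcli connection up myconnectionprofile
nmcli connection down myconnectionprofile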

hash known_hosts in Linux

When using SSH to connect to other hosts, a file containing the accepted public keys is saved in your home directory, namely ~/.ssh/known_hosts. Besides the public key, this file contains the IP address / hostname of the connected servers. This information can be critical if another program or user is able to read the known_hosts file. One way to protect this information is to hash the IP / hostname part of the file.

To make SSH do this, add the following config entry to your ssh config. If you cannot add it system-wide, you can use your local ssh config file ~/.ssh/config:

HashKnownHosts yes

You can use the following command to achieve this.

echo "HashKnownHosts yes" >> ~/.ssh/config

Now the SSH client will hash newly added entries automatically. To convert all existing entries, run:

ssh-keygen -H -f ~/.ssh/known_hosts
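
ssh-keygen keeps the original file as known_hosts.old. One way to verify the conversion is ssh-keygen's lookup mode, which should still find a known host by name in the hashed file (example.com stands for one of your hosts):

# example.com is a placeholder for a host you have connected to before
ssh-keygen -F example.com -f ~/.ssh/known_hosts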

If the conversion was successful, you can delete the old backup file:

rm ~/.ssh/known_hosts.old


Optimize CIFS mounts for slow connections in Linux

With the default settings for mounting CIFS shares I had the problem that, while uploading a large file (200 – 300 MB), my system froze for a few seconds. Additionally, progress bars, for example in Midnight Commander, filled up to 100 % within milliseconds and then it took 5 to 10 minutes until the copy job was actually finished.

The reason for this was that the copy job filled the dirty block cache of the Linux kernel; sometimes the cache was completely full and could only be cleared slowly because of the slow connection to the server.

You can watch the current state of the dirty cache with:

watch -n 2 "cat /proc/meminfo | egrep -i 'dirty|write'"

You can also adjust the kernel's dirty_background_bytes and dirty_bytes, but these settings are system-wide and not per mountpoint. Ideally you would want a large cache for fast local storage and a small cache for slow remote operations, so a single global value is always a compromise. Nevertheless, you should check the following kernel settings:

  • vm.dirty_background_bytes
  • vm.dirty_bytes
  • vm.dirty_expire_centisecs
  • vm.dirty_writeback_centisecs

There are many different “best values” to be found on the internet, but the right choice depends on your system, so I will not add more here.
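
To look at the values currently in effect on your system you can query them with sysctl (read-only, no recommendation implied):

sysctl vm.dirty_background_bytes vm.dirty_bytes vm.dirty_expire_centisecs vm.dirty_writeback_centisecs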

Sometimes I also got some errors (error 4 and error 5) after the copy job was finished, I assume due to timeouts or whatever. Additionally, the following entries were reported in the kernel log:

CIFS VFS: No writable handles for inode
CIFS VFS: cifs_invalidate_mapping: could not invalidate inode

To solve the problem with the CIFS mount, one solution is to disable the cache for this mount. You can do that with the mount option cache=none of mount.cifs.
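
For illustration, such a mount could look like this (server, share, mountpoint and user are only placeholders):

# mount the share without the local page cache; names below are examples
sudo mount -t cifs //fileserver/share /mnt/share -o cache=none,username=frank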

The upload is a little bit slower, but it is stable now.


Really caching web content

Today I tried to optimize a web application to speed up its usage. One big and low-hanging fruit is the correct caching of static files like images, CSS or JavaScript files. There are two things you can save with caching, and only the correct usage will really save both time and bandwidth.

1. Caching content with revalidation

Every interaction of an HTTP client with a server consists of two parts: the request and the response. In the request the browser defines which resource it needs; the server's response then contains the requested resource. One caching strategy is to slim down the response, which saves bandwidth and, on slow connections, also time. To achieve this you have to use the ETag and / or Last-Modified headers of the HTTP protocol.

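For illustration, the relevant part of such a first response could look like this (the header values are made up):

HTTP/1.1 200 OK
Content-Type: image/png
ETag: "a1b2c3d4"
Last-Modified: Tue, 01 Sep 2015 10:00:00 GMT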

If the browser needs the requested resource again, it sends the If-None-Match and / or If-Modified-Since request headers. The server can then decide whether the resource has changed or not. If not, it sends a 304 response without a body. But what if we already know on the first request that the content is safe for the next x minutes / days? In this case we could also save the request itself. Imagine you have 100 pictures on one page and a ping time of 100 ms to the server: checking these URLs sequentially would take 10 seconds.
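
To round off point 1, the revalidation described above would look roughly like this on the wire (values made up, matching the response shown earlier):

GET /images/logo.png HTTP/1.1
Host: www.example.com
If-None-Match: "a1b2c3d4"
If-Modified-Since: Tue, 01 Sep 2015 10:00:00 GMT

HTTP/1.1 304 Not Modified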

2. Caching content with expiration dates

To give your content a time range in which it is valid, you have to define an expiration date using the Expires header. Additionally, you should enable caching itself for a time range using the Cache-Control header. The Cache-Control header can take several values, which can be combined. A typical value would be:

“public,max-age=1800,must-revalidate”

The last option defines that the client has to re-request the resource from the server after “max-age” has passed, if the resource is needed again. Unfortunately the Safari browser has a bug which results in ignoring the Expires and Cache-Control headers under some circumstances. As Steve Clay wrote on his blog, the problem is related to the definition of must-revalidate. So using must-revalidate is currently not a good idea until the bug is resolved.
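
As an illustration of how these headers could be set, here is a minimal sketch for Apache (assuming mod_expires and mod_headers are enabled; must-revalidate is left out because of the bug mentioned above):

# sketch only: give common static files a 30 minute lifetime
<FilesMatch "\.(png|jpg|gif|css|js)$">
    ExpiresActive On
    ExpiresDefault "access plus 30 minutes"
    Header set Cache-Control "public,max-age=1800"
</FilesMatch>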

To easily find the resources with missing Expires headers, you can use YSlow, a Firefox plugin provided by Yahoo.
