Connecting to Check Point SNX VPN in Linux

Prerequisites

Ensure you have received the e-mail from your provider with the following information:

  • VPN Certificate file (.p12)
  • Your VPN password
  • Your server username

Use that information to replace the placeholders in the scripts throughout this tutorial.

Installation script

You can either download the client from the Check Point website (crappy and frustrating) or get it directly from your gateway via http://gateway-ip.

Look for a file called snx_install_linux**.sh

wget http://gateway-ip/**/snx_install_linux**.sh

Security: let's have a look at what is distributed and how running it will affect our system. The installer is a shell script with an embedded bzip2 tarball, so we strip everything before the BZ magic bytes (\x42\x5A) and list the archive contents:

$ sed -e 's/^.*\(\x42\x5A.*\)/\1/' snx_install_linux30-7075.sh | tar -jtvf -
-rwxr-xr-x builder/fw 3302196 2012-12-06 14:02 snx
-r--r--r-- builder/fw 747 2012-12-06 14:02 snx_uninstall.sh
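
If you want to inspect the payload itself, the same trick extracts it (a quick sketch; run it in an empty scratch directory):

$ mkdir /tmp/snx-payload && cd /tmp/snx-payload
$ sed -e 's/^.*\(\x42\x5A.*\)/\1/' ~/snx_install_linux30-7075.sh | tar -jxvf -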

Installation

$ sudo chmod +x snx_install_linux30-7075.sh
$ sudo ./snx_install_linux30-7075.sh

You may have some libraries missing since the client is still 32-bit.

$ sudo ldd /usr/bin/snx | grep "not found"
libpam.so.0 => not found
libstdc++.so.5 => not found

So we need some libraries from the legacy i386 architecture (enable it first if it is not already):

$ sudo dpkg --add-architecture i386 && sudo apt-get update
$ sudo apt-get install libx11-6:i386 libstdc++5:i386 libpam0g:i386

Connect to VPN

$ snx -c path-to-key/rl_johnbarleycorn.p12 -g -s company.inetservices.com companyvpn
Check Point's Linux SNX
build 800007075
Please enter the certificate's password:
SNX authentication:
Please confirm the connection to gateway: companyvpn VPN Certificate
Root CA fingerprint: MELT ELSE FUN BLUE ONUS GORE GAD SWAM VAST CHAT YAWL FOUR
Do you accept? [y]es/[N]o:
y
SNX - connected.
Session parameters:
===================
Office Mode IP : 172.16.10.145
Timeout : 12 hours

(exit code 0)
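
To disconnect later, tear the tunnel down:

$ snx -d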

Debugging

$ ssh -vvv vq
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
[…]

Check what it set up:

$ ifconfig | grep -A 8 tunsnx
tunsnx: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
inet 172.16.10.145 netmask 255.255.255.255 destination 172.16.10.144
inet6 fe80::ed2a:98f2:a47:8555 prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 100 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25 bytes 2252 (2.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

And for the routes:

$ route -n | grep tunsnx
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.7.5.0        0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.7.6.0        0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.8.4.0        0.0.0.0         255.255.254.0   U     0      0        0 tunsnx
10.8.6.0        0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.14.14.15     0.0.0.0         255.255.255.255 UH    0      0        0 tunsnx
10.14.14.15     0.0.0.0         255.255.255.255 UH    2      0        0 tunsnx
10.200.1.12     0.0.0.0         255.255.255.255 UH    0      0        0 tunsnx
10.200.1.12     0.0.0.0         255.255.255.255 UH    2      0        0 tunsnx
10.200.13.0     0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.200.13.0     0.0.0.0         255.255.255.0   U     2      0        0 tunsnx
10.200.14.0     0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.200.14.0     0.0.0.0         255.255.255.0   U     2      0        0 tunsnx
10.200.28.9     0.0.0.0         255.255.255.255 UH    0      0        0 tunsnx
10.200.28.9     0.0.0.0         255.255.255.255 UH    2      0        0 tunsnx
10.200.29.0     0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.200.29.0     0.0.0.0         255.255.255.0   U     2      0        0 tunsnx
172.16.10.68    0.0.0.0         255.255.255.255 UH    0      0        0 tunsnx
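
With iproute2 the same information is available more concisely:

$ ip route show dev tunsnx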

Automating connection

./snx-vpn-up:

#!/bin/bash

# trap ctrl-c and call ctrl_c()
trap ctrl_c INT

function ctrl_c() {
  snx -d
}

showroutes() {
  echo Routes:
  echo =======
  ip route | grep tunsnx
  if [ "$?" -ne 0 ]; then
    echo "Something failed. No routes? Try again."
    echo
    snx-vpn-down
    exit 1
  fi
}

ROUTES=$( ip route | grep tunsnx )
if [ ! -z "$ROUTES" ]; then
   echo "Already connected."
   echo
   showroutes
   exit 1
fi

echo "SNX - Connecting..."
echo 'PASSWORD' | snx -g -c path-to-key/rl_johnbarleycorn.p12  -s IP
sleep 1
showroutes
sleep 1
echo
echo /home/$( whoami )/snx.elg
echo =====
tail -n 1000 -f /home/$( whoami )/snx.elg

If this stops working at any point in the future, use expect to drive the password prompt.
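
A minimal sketch (the prompt strings are assumptions taken from the transcript above):

#!/bin/bash
# Hypothetical helper: feed the snx password prompt via expect
expect <<'EOF'
spawn snx -g -c path-to-key/rl_johnbarleycorn.p12 -s IP
expect "Please enter the certificate's password:"
send "PASSWORD\r"
expect "SNX - connected."
expect eof
EOF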

./snx-vpn-down:

#!/bin/bash
if [ -z "$( pgrep snx)" ]; then
  echo "SNX was not running."
  exit 1
fi

snx -d

GitLab: checkout all available repositories

Generate a private token

https://<GITLAB-SERVER1>/profile/personal_access_tokens
https://<GITLAB-SERVER2>/profile/personal_access_tokens

Fetch a list of all available repositories

QUERY='.[] | .path_with_namespace + "\t" + .ssh_url_to_repo' # JQ Query
curl --request GET --header "PRIVATE-TOKEN: <PRIVATE-TOKEN>" "<GITLAB-SERVER1>/api/v4/projects?simple=true&per_page=65536" | jq -r "$QUERY" > repo.list
curl --request GET --header "PRIVATE-TOKEN: <PRIVATE-TOKEN>" "<GITLAB-SERVER2>/api/v3/projects?simple=true&per_page=65536" | jq -r "$QUERY" >> repo.list
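
Note: GitLab caps per_page at 100, so with more repositories you have to page through the results. A sketch for the v4 API:

page=1
while :; do
  chunk="$(curl -s --header "PRIVATE-TOKEN: <PRIVATE-TOKEN>" "<GITLAB-SERVER1>/api/v4/projects?simple=true&per_page=100&page=${page}")"
  [ "$(echo "$chunk" | jq 'length')" -eq 0 ] && break
  echo "$chunk" | jq -r "$QUERY" >> repo.list
  page=$((page + 1))
done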

Create directories for repositories

cut -f1 repo.list | xargs mkdir -p

Checkout projects (with GNU parallel)

parallel --colsep '\t' --jobs 4 -a repo.list git clone {2} {1}

Build list of git repositories

find -type d -name ".git"  | xargs realpath | xargs dirname > path.list  

Report repository branch or checkout branch

cat path.list | xargs -I{} sh -c "cd {}; echo {}; git branch"
cat path.list | xargs -I{} sh -c "cd {}; echo {}; git checkout master"
cat path.list | xargs -I{} sh -c "cd {}; echo {}; git checkout develop"

Note: when you are migrating repositories you should use git clone --mirror.
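
For example, the same repository list can be mirrored instead of checked out:

parallel --colsep '\t' --jobs 4 -a repo.list git clone --mirror {2} {1}.git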

Update: try paging to get all available repositories (see the sketch above). If you still don’t get all projects and just get 404s you’re fucked. Try creating the list from what you see browsing GitLab, or try to get admin access.

Infojunk August 2018

Linux

It’s about responsiveness – not about best performance!

Apache

Hardware

Coding

Python

Yes, deep dive into Python. And I don’t like it. As well as PHP. Do Rust. Trust me!

Math

AutoFS: Indirect User-Automounter for AWS S3 using either s3fs or goofyfs

I recently discovered the benefits of autofs and struggled with some issues mounting S3 buckets. I didn’t find anything similar, so I wrote auto.s3, which can use either FUSE s3fs or goofyfs.

auto.s3 uses the AWS CLI and jq to resolve a per-user mount space to /home/<user>/Remote/S3/<aws-profile>/<bucket>/** with correct file and directory permissions.
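
Once everything is wired up, a session looks roughly like this (profile and bucket names are placeholders):

$ cd ~/Remote/S3/my-profile/my-bucket   # triggers the automount
$ ls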

The scripts currently run on my Ubuntu Bionic Beaver, but it should be possible to use them on other distributions with minimal work. For OSX – nah… pay me!

Please read the comments included in the files!

/etc/auto.s3

#!/bin/bash

##
# AutoFS user-folder indirect Automounter for S3 using either FUSE goofyfs or s3fs (0.1)
# 
# ----------------------------------------------------------------------------
# "THE FUCK-WARE LICENSE" (Revision 1):
# <mg@evolution515.net> wrote this file. As long as you retain this notice you
# can do whatever you want with this stuff. If we meet some day, and you think
# this stuff is worth it, you can buy me a beer in return or have sex with me, 
# or both.
# ----------------------------------------------------------------------------
#
# Requirements
# - AWS CLI installed
# - JQ installed
# - Either FUSE goofyfs or s3fs installed
#
# Usage
#  - place config to $S3FS_CONFIG directory using s3fs config format (ACCESS_KEY:ACCESS_SECRET)
#  - place this file to /etc/auto.s3 and make it executable
#  - add to /etc/auto.master: /home/<user>/Remote/S3 /etc/auto.s3 --timeout=3000
#  - choose backend by config section in this file (NOTE: goofyfs needs the goofyfs-fuse wrapper below)
#  - cd <mountpoint>/<aws-profile>/<bucket>
# 
# Debugging
# - Stop system service by: 
#   systemctl stop autofs
# - Execute as process (use --debug to see mount commands)
#   automount -f -v 
#
# Clean up mountpoints (when autofs hangs or mountpoints are still used)
#   mount | grep autofs | cut -d' ' -f 3 | xargs umount -l 
#    
# Logging
# - Logs go to syslog except you are running automount within TTY
# 
# Notes
# - goofyfs makes sometimes trouble - use s3fs!
# - Daemon needs to run as root since only root has access to all mount options
# - Additional entries can be defined with the -Dvariable=Value map-option to automount(8).
# - Alternative fuse style mount can be done by -fstype=fuse,allow_other :sshfs\#user@example.com\:/path/to/mount
# - We do not read out .aws/config since not all credentials do necessary have S3-access
# - https://github.com/kahing/goofys/pull/91/commits/07dffdbda4ff7fc3c538cb07e58ad12cc464b628
# - goofyfs catfs cache is not activated by default
# - chown/chmod is not that nice but works ;)
# - other backends not planned at the moment
#
# AWS Commands
# - aws s3api list-buckets
# - aws s3api list-objects --bucket <bucket>
#
# FAQ
# -  https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ
#
# Autofs provides additional variables that are set based on the user requesting the mount:
# 
#   USER   The user login name
#   UID    The user login ID
#   GROUP  The user group name
#   GID    The user group ID
#   HOME   The user home directory
#   HOST   Hostname (uname -n)
#
# From exports
#
#   AUTOFS_GID="1000"
#   AUTOFS_GROUP="ctang"
#   AUTOFS_HOME="/home/ctang"
#   AUTOFS_SHOST="refpad-16"
#   AUTOFS_UID="1000"
#   AUTOFS_USER="ctang"
#

# Strict mode
set -euo pipefail -o errtrace

# Config
S3FS_CONFIG="${AUTOFS_HOME:-$HOME}/.autofs/s3fs" # user directory
BACKEND="goofyfs" # s3fs|goofyfs - not goofyfs requires goofyfs-fuse!
DEBUG=0 # 0|1 where 1 is on - output will go to syslog or journald
UMASK="750" # Umask for mountpoint placeholder directories
OPTS="defaults,noatime" # mount options
if [[ -z "${GID:-}" ]]; then
    GID="$(id -g)"
fi

# We ensure every command output can be parsed in neutral form
export LC_ALL=C
export AWS_SDK_LOAD_CONFIG=0

# Const
PWD="$(pwd)"
SCRIPT_NAME=`basename "$0"`
LOGGER_CMD="logger -i -t ${SCRIPT_NAME}"
if test -t 1; then 
    # if tty
    LOGGER_CMD="${LOGGER_CMD}  --no-act --stderr"
fi
PROFILES=()

if ! which jq 1>/dev/null 2>&1; then
     $LOGGER_CMD "Cannot find jq binary"
     exit 1
fi

if ! which aws 1>/dev/null 2>&1; then
     $LOGGER_CMD "Cannot find aws binary"
     exit 1
fi

# If the user is already in a mount point this script will be called by root
# so we need to remap some stuff
if [[ ! "${HOME:-}" == "${PWD}/"* ]] && [[ "${PWD}" =~ ^(/home/[^/]+) ]]; then
    S3FS_CONFIG=${S3FS_CONFIG/${AUTOFS_HOME:-$HOME}/${BASH_REMATCH[1]}}
    HOME="${BASH_REMATCH[1]}"
    USER="${HOME##*/}"
    AUTOFS_UID="$(id -u ${USER})"
    AUTOFS_GID="$(id -g ${USER})"
    $LOGGER_CMD "Initializing. Remapping home to ${HOME}, user=${USER}, config=${S3FS_CONFIG}"
fi

# Prevent errors
if [[ ! -d ${S3FS_CONFIG} ]]; then
     $LOGGER_CMD "Config directory ${S3FS_CONFIG} not found."
     exit 1
fi

# Mountpoint needs to be owned by user 
chown -R ${AUTOFS_UID:-$UID}:${AUTOFS_GID:-$GID} "${S3FS_CONFIG}"
chmod -R 700 "${S3FS_CONFIG}"

# Create indirect mount points for s3 profiles
PROFILES=($(ls -1 ${S3FS_CONFIG}))
if [[ -z "${PROFILES[*]}" ]]; then
    $LOGGER_CMD "No profiles found within ${S3FS_CONFIG}"
else
    for profile in "${PROFILES[@]}"; do
        chmod 600 ${S3FS_CONFIG}/${profile}
        if [[ ! -d "${PWD}/${profile}" ]]; then
            $LOGGER_CMD "Creating ${PWD}/${profile}"
            mkdir -p "${PWD}/${profile}"  || true > /dev/null
            chmod ${UMASK} "${PWD}/${profile}"
            chown ${AUTOFS_UID:-$UID}:${AUTOFS_GID:-$GID} "${PWD}/${profile}"
        fi
    done
fi

# Requested profile
PROFILE="${1:-}"
if [[ ! -e "${S3FS_CONFIG}/${PROFILE}" ]]; then
    $LOGGER_CMD "No valid profile=${PROFILE} given! "
    exit 1
fi
$LOGGER_CMD "Profile: $@"
if [[ -z "${PROFILE}" ]]; then
    $LOGGER_CMD "No profile given" 
    exit 1
fi

if [[ "${BACKEND}" == "s3fs" ]]; then
    if ! which s3fs 1>/dev/null 2>&1; then
        $LOGGER_CMD "Cannot find s3fs installation"
        exit 1
    fi
    OPTS="-fstype=fuse.s3fs,uid=${AUTOFS_UID:-${UID}},gid=${AUTOFS_UID:-${GID}},umask=000,${OPTS},_netdev,allow_other,default_permissions,passwd_file=${S3FS_CONFIG}/${PROFILE},use_cache=$(mktemp -d)"
    if [[ "$DEBUG" -eq 1 ]]; then
        OPTS="${OPTS},dbglevel=info,curldbg"
    fi
elif [[ "${BACKEND}" == "goofyfs" ]]; then
    if ! which goofyfs-fuse 1>/dev/null 2>&1; then
        $LOGGER_CMD "Cannot find goofyfs installation"
        exit 1
    fi
    OPTS="-fstype=fuse.goofyfs-fuse,${OPTS},_netdev,nonempty,allow_other,passwd_file=${S3FS_CONFIG}/${PROFILE},--file-mode=0666,nls=utf8"
    if [[ "${DEBUG}" -eq 1 ]]; then
        OPTS="${OPTS},--debug_s3,--debug_fuse"
    fi
else
    $LOGGER_CMD "Unsupported backend ${BACKEND}"
    exit 1
fi

read -r -d '' CREDENTIALS < ${S3FS_CONFIG}/${PROFILE} || true # read returns 1 without a trailing NUL; don't let set -e abort
export AWS_ACCESS_KEY_ID="${CREDENTIALS%%:*}"
export AWS_SECRET_ACCESS_KEY="${CREDENTIALS##*:}"
BUCKETS=($(aws s3api list-buckets --output json | jq -r '.Buckets[].Name'))
printf "%s\n" "${BUCKETS[@]}" | awk -v "opts=${OPTS}" -F '|' -- '
    BEGIN { ORS=""; first=1 }
    {
          if (first)
            print opts; first=0
          bucket = $1
          # Enclose mount dir and location in quotes;
          # escape "$" and "&" in the bucket name as they are special
          gsub(/\$/, "\\$", bucket)
          gsub(/&/, "\\\\&", bucket)
          print " \\\n\t \"/" bucket "\"", "\":" bucket "\""
          # print " \\\n\t " bucket, ":"bucket
        }
    END { if (!first) print "\n"; else exit 1 }
    '

/usr/local/bin/goofyfs-fuse

#!/bin/bash

## 
# GoofyFS - FUSE wrapper
#
# supports a passwd_file argument analogous to s3fs, since goofyfs does not support a custom credentials file
# (see https://github.com/kahing/goofys/pull/91/commits/07dffdbda4ff7fc3c538cb07e58ad12cc464b628)
#
# usage: mount \
#   -t fuse.goofyfs-fuse -o allow_other,--passwd_file=/home/ctang/.autofs/s3fs/<profile>,--file-mode=0666,nls=utf8,--debug_s3,--debug_fuse \
#   <bucket> \
#   <mountpoint>

# Strict mode
set -euo pipefail -o errtrace

ARGS=()
PASSWD_FILE=""
BUCKET=""
MOUNTPOINT=""

while (($#)); do
    case "$1" in
        -t)
            shift 1
            ARGS+=("-t" "fuse.goofyfs") 
            ;;
        -o)
            shift 1
            OPTS=()
            for arg in  $( echo "${1}" | tr ',' '\n' ); do
                case "${arg}" in
                    --passwd_file=*)
                        PASSWD_FILE="${arg##*=}"
                        ;;
                    --*)
                        ARGS+=($( echo "${arg}" | tr '=' '\n' ))
                        ;;
                    *)
                        OPTS+=("${arg}")
                esac
            done
            opts="$(printf "%b," ${OPTS[@]})"
            ARGS+=("-o" "${opts%,}")
            ;;
        *)
            if [[ -z "${BUCKET}" ]]; then
                BUCKET="$1"
            elif [[ -z "${MOUNTPOINT}" ]]; then
                MOUNTPOINT="$1"
            else
                ARGS+=("$1")
            fi
            ;;
    esac
    shift 1
done

ARGS+=("${BUCKET}" "${MOUNTPOINT}")

export AWS_SDK_LOAD_CONFIG=0
if [[ -n "${PASSWD_FILE}" ]]; then
    read -r -d '' CREDENTIALS < ${PASSWD_FILE} || true # read returns 1 without a trailing NUL; don't let set -e abort
    export AWS_ACCESS_KEY_ID="${CREDENTIALS%%:*}"
    export AWS_SECRET_ACCESS_KEY="${CREDENTIALS##*:}"
fi

goofyfs "${ARGS[@]}"

Decoding MySQL ~/.mylogin.cnf

Little tool to decode MySQL’s badly secured login-path. It does the same as the official MySQL tool “my_print_defaults” (based on the MySQL OSS Python libs).

More security can be achieved by: https://www.percona.com/blog/2016/10/12/encrypt-defaults-file/
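
Save the script below (e.g. as decode-mylogin.php – the filename is arbitrary) and compare its output with the official tool:

$ php decode-mylogin.php
$ my_print_defaults -s client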

#!/usr/bin/env php
<?php

$fp = fopen(getenv('HOME') . '/.mylogin.cnf', "r");
if (!$fp) {
    die("Cannot open .mylogin.cnf");
}

fseek($fp, 4);
$key = fread($fp, 20);

// generate real key
$rkey = "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0";
for ($i = 0; $i < strlen($key); $i++) {
    $rkey[$i % 16] = ($rkey[$i % 16] ^ $key[$i]);
}

$section = null;
$settings = [];

while ($len = fread($fp, 4)) {

    // as integer
    $len = unpack("V", $len);
    $len = $len[1];

    // decrypt
    $crypt = fread($fp, $len);
    $plain = openssl_decrypt($crypt, 'aes-128-ecb', $rkey, true);
    $decoded = preg_replace("/[^\\x32-\\xFFFF]/", "", $plain);
    if (preg_match('/^\[([^\]]+)]/', $decoded, $matches)) {
        $section = $matches[1];
        $settings[$section] = [];
    } elseif (preg_match('/^(\w+)=(.*)/', $decoded, $matches)) {
        $settings[$section][$matches[1]] = $matches[2];
    }
}
fclose($fp);

echo json_encode($settings, JSON_PRETTY_PRINT);

Source: https://gist.github.com/robocoder/024442d06b8a75d292d58c5884be4642

Source of mycli (Python): _mylogin_cnf of mycli

Albert Launcher 0.14: Switch Application Window Plugin

Since I really don’t like the Switcher plugin for GNOME and I’m sticking to Albert Launcher, I added this extension. Maybe it will get accepted into their Python extensions.

Drop it to ~/.local/share/albert/org.albert.extension.python/switch-app-window.py or one of the other provided locations, and activate it within the extension settings:

import re
import subprocess

from albertv0 import *

__iid__ = "PythonInterface/v0.1"
__prettyname__ = "Switch App Window"
__version__ = "1.0"
__trigger__ = "w "
__author__ = "Markus Geiger <mg@evolution515.net>"
__id__ = "window"
__dependencies__ = []

iconPath = iconLookup("go-next")

def handleQuery(query):
    stripped = query.string.strip()
    if not query.isTriggered and not stripped:
        return

    results = []
    process = subprocess.Popen(['wmctrl', '-l'], stdout=subprocess.PIPE, encoding='utf8')

    output, error = process.communicate()

    # wmctrl -l columns: window id, desktop (-1 = sticky), host, window title
    patt = re.compile(r'^(\S+)\s+(-?\d+)\s+(\S+)\s+(.+)$')
    window_re = re.compile(r'^(.+)\s+-\s+(.+)$')

    for line in output.split('\n'):
        match = patt.match(line)
        if not match:
            continue

        window_id = match.group(1)
        fulltitle = match.group(4)
        if not query.string.lower() in fulltitle.lower():
            continue

        titlematch = window_re.match(fulltitle)

        if titlematch:
            windowtitle = titlematch.group(1)
            program_title = titlematch.group(2)
        else:
            program_title = fulltitle
            windowtitle = fulltitle

        results.append(
            Item(
                id="%s_%s" % (__id__, window_id),
                icon=iconPath,
                text=program_title,
                subtext=windowtitle,
                completion=query.rawString,
                actions=[
                    ProcAction("Focus", ["wmctrl", "-ia", window_id]),
                    ProcAction("Close", ["wmctrl", "-ic", window_id])
                ]
            )
        )
    return results

git: encrypt credentials within repository

Especially in the microservices era you should use a config server and never store your credentials in a repository!

You should not use git smudge/clean filters for encryption. Why? Example!

Let’s create an example repository

% TMP=$(mktemp -d)
% cd $TMP
% git init
% echo 'Hello world!' > credentials

Add .gitattributes

/credentials filter=crypto

Add .git/config

[filter "crypto"]
smudge = openssl enc -aes-256-cbc -salt 
clean = openssl enc -aes-256-cbc -salt 
require

Note: required indicates that these commands need exit code 0; otherwise it could happen that the files are added without any encryption. You can test this by using smudge = gpg -d -q --batch --no-tty and clean = gpg -ea -q --batch --no-tty -r <SIGNATURE> filters.
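
As a .git/config sketch with gpg (<SIGNATURE> is a placeholder for the recipient's key ID):

[filter "gpg"]
    smudge = gpg -d -q --batch --no-tty
    clean = gpg -ea -q --batch --no-tty -r <SIGNATURE>
    required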

Add the files to the stage and commit

% git add .gitattributes credentials
% git add credentials
enter aes-256-cbc encryption password:
Verifying - enter aes-256-cbc encryption password:
% git status
Changes to be committed:
    new file:   .gitattributes
    new file:   credentials
% git commit -m "Initial commit"
[master (root-commit) 860cedc] Initial commit
 2 files changed, 2 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 credentials

Note: The .git/config for filters will not be added to the commit!

Clone the example repository

% TMP2=$(mktemp -d)
% cd $TMP2
% git clone $TMP .
% cat credentials
Salted__��:�.��1٩�H��8��[��25%% 

We now have an encrypted file within our repository. To decode it we need the smudge filter. On the other hand, anyone without the filter configuration can still commit this file and break the setup.

Thoughts

  • Yes, you can automate credentials encoding or decoding with filters.
  • But you could also encode or decode manually and check in or out. It does not make much of a difference!
  • GitLab provides file locks. This probably makes sense as long as you use GitLab premium. Otherwise you might use git commit hooks.
  • If you wanna use git repos to store credentials and have them fully encrypted you may have a look at [Git Annex GCrypt](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/). Yet this is not git itself but its own project written in Haskell! It just uses the git format!
  • You’re doomed if you store your production credentials in clear text so every developer can read it.
  • Credentials should never be stored in repositories!
  • Use config servers and tools like consul for creating configs or directly pull the configs from the servers!

So, why all these complications?! This can easily get messed up. It is just a badly designed »solution«. Therefore I would call it only a »workaround« and not a real solution!

Docker on Windows: CIFS v3.02 mounts failing with big file count

Oh, well I love Docker and Windows – NOT! Another issue: https://github.com/docker/for-win/issues/2285

Description

We use containers for our developer environments. The projects are built with tools shipped within the containers. While that works like a charm on OSX and Linux, we face problems with some containers on Windows which have a heavy load of source files. The project directory is mounted into the container via CIFS 3.02.

As a workaround we have found that builds do work with CIFS 2.0 mounts, but not with 2.1 or 3.02. When the build fails it complains about “File not found”. Yet the file is there and can be read!

I am certain the bug did not occur with Docker for Windows 1.13.0 (2017-01-19), yet already 1.12.1 (2016-09-16) introduced CIFS version 3.02. Maybe it is related to the CIFS kernel module or to communication with the Windows CIFS server? I guess LinuxKit v0.4 or Microsoft’s Redstone update introduced the problem.

Expected behavior

On long-running builds all files stay accessible and the build does not fail with a file-not-found or similar error.

Actual behavior

The build fails since it encounters a file-not-found error. Yet the file can be read in the terminal with cat or other tools.

Information

Microsoft Windows [Version 10.0.17134.165]
Version 18.06.0-ce-win70 (19075)
Channel: stable
501df65

Running Linux Containers only.

Steps to reproduce the behavior

0. Setup

I tried the following test setup. Replace CIFS vers=2.0 with 3.02 or 2.1 and you’ll see it fail. These commands are run from Windows WSL Ubuntu 18.04 with DOCKER_HOST=tcp://localhost:2375.

dockerfile+faillog.zip

1. Build test image

docker build -t dockerbug .

2. Create test volumes with CIFS 2.0 and 3.02 protocol.

Please adjust host, username, password and domain to your Windows environment so the shares can be mounted.

The problem is NOT symlink related. I also tried switching mount options like mfsymlinks and so on.

docker volume create \
   --driver local \
   --opt type=cifs \
   --opt device=//10.0.75.1/C \
   --opt o=username=Markus,password=XXXXXXXX,relatime,vers=3.02,sec=ntlmsspi,cache=strict,username=Markus,domain=REFDESK-01,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.75.1,file_mode=0777,dir_mode=0777,iocharset=utf8,nounix,serverino,mapposix,mfsymlinks,noperm,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1,nobrl \
   testcifs3.02
docker volume create \
   --driver local \
   --opt type=cifs \
   --opt device=//10.0.75.1/C \
   --opt o=username=Markus,password=XXXXXXXX,relatime,vers=2.0,sec=ntlmsspi,cache=strict,username=Markus,domain=REFDESK-01,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.75.1,file_mode=0777,dir_mode=0777,iocharset=utf8,nounix,serverino,mapposix,mfsymlinks,noperm,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1,nobrl \
   testcifs2.0

3. Try a containered npm install

Attention! This may take a while to run! It also may roll back some files. This is not the error!

Failing with CIFS 3.02

docker run -it --rm -v testcifs3.02:/mnt/c --workdir $(pwd) dockerbug /bin/sh -c '$(getent passwd $(whoami) | cut -d: -f7)'
bash-4.2# mount | grep cifs
//10.0.75.1/C on /mnt type cifs (rw,relatime,vers=3.02,sec=ntlmsspi,cache=strict,username=Markus,domain=REFDESK-01,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.75.1,file_mode=0777,dir_mode=0777,iocharset=utf8,nounix,serverino,mapposix,nobrl,mfsymlinks,noperm,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
npm install node-sass
[...]
Error: EINVAL: invalid argument, open '/mnt/c/Users/Markus/Desktop/docker-containered/node_modules/node-sass/package.json'
    at Error (native)
    at Object.fs.openSync (fs.js:642:18)
    at Object.fs.readFileSync (fs.js:510:33)
    at Object.Module._extensions..json (module.js:592:20)
    at Module.load (module.js:494:32)
    at tryModuleLoad (module.js:453:12)
    at Function.Module._load (module.js:445:3)
    at Module.require (module.js:504:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/mnt/c/Users/Markus/Desktop/docker-containered/node_modules/node-sass/lib/extensions.js:7:9)
[...]
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! node-sass@4.9.2 install: `node scripts/install.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the node-sass@4.9.2 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2018-07-26T13_49_04_149Z-debug.log

exit code 1
/root/.npm/_logs/2018-07-26T13_49_04_149Z-debug.log
33833 verbose stack Error: node-sass@4.9.0 install: `node scripts/install.js`
33833 verbose stack Exit status 1
33833 verbose stack     at EventEmitter.<anonymous> (/usr/local/share/.config/yarn/global/node_modules/npm/node_modules/npm-lifecycle/index.js:304:16)
33833 verbose stack     at emitTwo (events.js:106:13)
33833 verbose stack     at EventEmitter.emit (events.js:191:7)
33833 verbose stack     at ChildProcess.<anonymous> (/usr/local/share/.config/yarn/global/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
33833 verbose stack     at emitTwo (events.js:106:13)
33833 verbose stack     at ChildProcess.emit (events.js:191:7)
33833 verbose stack     at maybeClose (internal/child_process.js:920:16)
33833 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:230:5)
33834 verbose pkgid node-sass@4.9.0
33835 verbose cwd /mnt/c/Users/Markus/Projects/bugreport
33836 verbose Linux 4.9.93-linuxkit-aufs
33837 verbose argv "/usr/bin/node" "/usr/local/bin/npm" "install"
33838 verbose node v6.14.3
33839 verbose npm  v6.2.0
33840 error code ELIFECYCLE
33841 error errno 1
33842 error node-sass@4.9.0 install: `node scripts/install.js`
33842 error Exit status 1
33843 error Failed at the node-sass@4.9.0 install script.
33843 error This is probably not a problem with npm. There is likely additional logging output above.
33844 verbose exit [ 1, true ]

From a run within a different container

dmesg

[ 2828.299719] 0000 4800 53fe 424d 0040 0001 0034 c000  ...H.SMB@...4...
[ 2828.299720] 0005 0001 0009 0000 0000 0000 b470 0001  ............p...
[ 2828.299721] 0000 0000 4238 0000 0001 0000 0015 0000  ....8B..........
[ 2828.299723] 4c00 0000 3377 4dfd 4a86 50e0 64f8 2bbe  .L..w3.M.J.P.d.+
[ 2828.299723] 1d92 7f70 0009 0000 0000 0000            ..p.........
[ 2828.299757] 0000 4800 53fe 424d 0040 0001 0034 c000  ...H.SMB@...4...
[ 2828.299759] 0005 0001 0009 0000 0000 0000 b470 0001  ............p...
[ 2828.299760] 0000 0000 4238 0000 0001 0000 0015 0000  ....8B..........
[ 2828.299761] 4c00 0000 3377 4dfd 4a86 50e0 64f8 2bbe  .L..w3.M.J.P.d.+
[ 2828.299762] 1d92 7f70 0009 0000                      ..p.....
[ 2828.299766] Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND

Successful with CIFS 2.0

docker run -it --rm -v testcifs2.0:/mnt/c --workdir $(pwd) dockerbug /bin/sh -c '$(getent passwd $(whoami) | cut -d: -f7)'
bash-4.2# mount | grep cifs
//10.0.75.1/C on /mnt/c type cifs (rw,relatime,vers=2.0,sec=ntlmsspi,cache=strict,username=Markus,domain=REFDESK-01,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.75.1,file_mode=0777,dir_mode=0777,iocharset=utf8,nounix,serverino,mapposix,nobrl,mfsymlinks,noperm,rsize=65536,wsize=65536,echo_interval=60,actimeo=1)
npm install
[...]
+ node-sass@4.9.2
added 199 packages from 134 contributors and audited 531 packages in 45.896s
found 4 moderate severity vulnerabilities
  run `npm audit fix` to fix them, or `npm audit` for details

real    0m46.728s
user    0m13.020s
sys 0m6.630s

exit code 0

Ubuntu Bionic: HD Graphics 520 i915 configuration

/etc/default/grub

GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=10

GRUB_CMDLINE_LINUX_DEFAULT="noplymouth intel_pstate=skylake_hwp i915.enable_rc6=1 i915.enable_guc=3 i915.enable_fbc=1 i915.semaphores=1 nvme_load=YES intel_pstate=enable i915.enable_psr=1 i915.disable_power_well=0"
# GRUB_CMDLINE_LINUX="elevator=deadline"

# Uncomment to disable graphical terminal (grub-pc only)
GRUB_TERMINAL=console

# you can see them in real GRUB with the command `vbeinfo'
# GRUB_GFXMODE=1024x768x16
GRUB_GFXPAYLOAD_LINUX=1900x1080x8

xorg.conf

Section "Device"
Identifier  "Intel Graphics"
Driver      "intel"
Option      "DRI" "3"
Option      "HWRotation" "true"
Option      "Tiling" "true"
Option      "SwapBuffersWait" "0"
Option      "ReprobeOutputs" "true"
Option      "VSync" "true"
# Option      "TearFree" "true"
# Option      "AccelMode" "sna"
EndSection
Section "dri"
Mode 0666
EndSection

List supported kernel mod options

modinfo -p i915

To check which options are currently enabled, run

systool -m i915 -av

If editing /etc/modprobe.d/* don’t forget to update the ramdisk! Or use kernel params in GRUB.
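
For example, the i915 options from the GRUB line above would look like this in modprobe.d (a sketch):

# /etc/modprobe.d/i915.conf
options i915 enable_guc=3 enable_fbc=1 enable_psr=1

# then rebuild the ramdisk
sudo update-initramfs -u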

Kernel Housekeeper Update Script

I use this script for kernel housekeeping since I’m mostly on mainline kernels. The script is currently used with Ubuntu Bionic Beaver.

#!/bin/bash

function version_gt() { test "$(echo "$@" | tr " " "\n" | sort -V | head -n 1)" != "$1"; }
function version_le() { test "$(echo "$@" | tr " " "\n" | sort -V | head -n 1)" == "$1"; }
function version_lt() { test "$(echo "$@" | tr " " "\n" | sort -rV | head -n 1)" != "$1"; }
function version_eq() { test "$(echo "$@" | tr " " "\n" | sort -rV | head -n 1)" == "$1"; }
#if version_gt $LATEST_KERNEL_VERSION_SHORT $CURRENT_KERNEL_VERSION_SHORT; then
#   echo "$LATEST_KERNEL_VERSION_SHORT is greater than $CURRENT_KERNEL_VERSION_SHORT"
# fi

OLD_IFS="$IFS"

IGNORE_PACKAGES="linux-headers-generic|linux-image-generic"
DUMP="$( curl -s "http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=M;O=A" )"
HAS_INTERNET=0
if [ "$?" -eq "0" ] &amp;&amp; [ -n "$DUMP" ]; then
LATEST_KERNEL_VERSION="$( echo "$DUMP" | sed 's/&lt;\/*[^&gt;]*&gt;//g' | grep -E -o "^v[^/]+" |  sort -V  | grep -v '\-rc' | tail -1 | sed -r 's/^v//g' )"
LATEST_KERNEL_VERSION_SHORT="$( echo $LATEST_KERNEL_VERSION | tr '.-' '.' | cut -d. -f1-3 )"
HAS_INTERNET=1
else
LATEST_KERNEL_VERSION="Unable to resolve"
fi

# Test if we have a generic kernel version
GENERIC_KERNEL_VERSION=$( dpkg-query -W -f'${db:Status-Abbrev} ${Package} ${Version}\n'  | grep -e '^ii' | grep 'linux-image-generic' | awk '{ print $3 }'  )
echo "Generic Kernel Version"
echo "======================"
if [ -n "$GENERIC_KERNEL_VERSION" ]; then
GENERIC_KERNEL_VERSION_SHORT="$( echo $GENERIC_KERNEL_VERSION | tr '.-' '.' | cut -d. -f1-3 )"
echo "$GENERIC_KERNEL_VERSION_SHORT ($GENERIC_KERNEL_VERSION)"
IGNORE_PACKAGES="$IGNORE_PACKAGES|linux-.*-$GENERIC_KERNEL_VERSION_SHORT-"
# echo $IGNORE_PACKAGES;
else
echo "Not installed".
fi
echo

CURRENT_KERNEL_VERSION=$( uname -r | cut -d- -f1,2 )
CURRENT_KERNEL_VERSION_SHORT="$( echo $CURRENT_KERNEL_VERSION | tr '.-' '.' | cut -d. -f1-3 )"
echo Current Kernel
echo ==============
echo "$CURRENT_KERNEL_VERSION ($CURRENT_KERNEL_VERSION_SHORT)"
echo

echo "Latest Kernel (stable)"
echo "======================"
echo "$LATEST_KERNEL_VERSION ($LATEST_KERNEL_VERSION_SHORT)"
echo

IGNORE_PACKAGES="$IGNORE_PACKAGES|linux-.*$CURRENT_KERNEL_VERSION-"

# TODO add /boot/efi support
echo "Partitions "
echo ==========
BOOT_PARTITIONS="$( df -h --output=target | grep "/boot" | tr '\n' ':' )"
if [ -z "$BOOT_PARTITIONS" ]; then
echo "No special partitions."
else
IFS=':'
for boot_partition in $BOOT_PARTITIONS; do
boot_part_total=$( df $boot_partition -h --output=size | tail -n 1 | tr -d '[:space:]' )
boot_part_avail=$( df $boot_partition -h --output=avail | tail -n 1 | tr -d '[:space:]' )
echo
echo "$boot_partition size=$boot_part_total, free=$boot_part_avail"
# Installed kernels

if [ ! "$( echo $boot_partition | grep efi &gt; /dev/null 2&gt;&amp;1; echo $? )" -eq 0 ]; then
INSTALLED="$( test -r ${boot_partition} &amp;&amp; ls -tr ${boot_partition}/{vmlinuz,initrd.img}-* | cut -d- -f2 | sort | uniq | tr '\n' ':'  )"
IFS=':'
for version in $INSTALLED; do
echo -n " $version taking "
SIZE=$( cd $boot_partition; ls -1  | grep "\-${version}-" | xargs du -shc | tail -1 | cut -f 1 )
echo $SIZE
done
fi;
done;
fi
echo

echo "Installed packages"
echo "=================="
echo

ALL_INSTALLED_PACKAGES=$( dpkg-query -W -f'${db:Status-Abbrev} ${Package} ${Version}\n'  | grep -e '^ii' | grep -e 'linux-image\|linux-signed-image\|linux-headers' | awk '{print $2}' | sort -V  )
echo $ALL_INSTALLED_PACKAGES
echo

echo "Removeable packages"
echo "==================="
echo

# config files
REMOVEABLE_PACKAGES="$( dpkg-query -W -f'${db:Status-Abbrev} ${Package} ${Version}\n'  | grep -e '^rc' | grep -e 'linux-image\|linux-signed-image\|linux-headers' | awk '{print $2}' | sort | grep -v -E "$IGNORE_PACKAGES" )"

for tag in "linux-image" "linux-headers" "linux-signed-image" "linux-image-extra"; do
# echo c="$tag-$CURRENT_KERNEL_VERSION-";
packages_to_be_removed="$( echo $ALL_INSTALLED_PACKAGES | grep $tag | sort -V | awk 'index($0,c){exit} //' c="$tag-$CURRENT_KERNEL_VERSION" | grep -v -E "$IGNORE_PACKAGES" )"
REMOVEABLE_PACKAGES="$( echo -e "$REMOVEABLE_PACKAGES\n$packages_to_be_removed" | sed '/^$/d' | sort -V | uniq )"
done;

if ! [ $(id -u) = 0 ]; then
    echo "You need to be root! Aborting..."
    exit 1
fi

if [ -z "$REMOVEABLE_PACKAGES" ]; then
echo "No packages to remove found!"
echo
else
echo $REMOVEABLE_PACKAGES
read -p "Remove (y/n)?" CHOICE
echo

case "$CHOICE" in
y|Y )
CMD="apt-get remove --purge $( echo $REMOVEABLE_PACKAGES | tr '\n' ' ' )"
# echo $CMD
eval "$CMD"
apt-get autoremove
echo
;;
* ) ;;
esac
fi

##
# Install packages from latest kernel
if [ $HAS_INTERNET == 1 ]; then

    echo "Kernel upgrade"
    echo "=============="
    echo

    LATEST_INSTALLED=$( echo $ALL_INSTALLED_PACKAGES | grep "linux-image-$LATEST_KERNEL_VERSION-" )
    if [ -n "$LATEST_INSTALLED" ]; then
        echo "$LATEST_KERNEL_VERSION already installed. No need to update."
        echo
    else
        PACKAGES="$( curl -s "http://kernel.ubuntu.com/~kernel-ppa/mainline/v$LATEST_KERNEL_VERSION_SHORT/" | grep -o -E 'linux-[_[:alnum:]\.\-]+.deb' | sort | uniq )"
        # echo $PACKAGES
        MACHINE_TYPE=`uname -m`
        if [ ${MACHINE_TYPE} == 'x86_64' ]; then
            ARCH="amd64"
        else
            ARCH="i386"
        fi
        INSTALL_PACKAGES="$( echo $PACKAGES | grep -E "_($ARCH|all)" | grep -v lowlatency )"
        if [ -z "$INSTALL_PACKAGES" ]; then
            echo "No packages to install found. Check your script for URL errors!" 1>&2
            exit 1
        fi

        echo "Following packages will be installed:"
        echo $INSTALL_PACKAGES
        echo
        read -p "Continue (y/n)?" CHOICE
        echo

        # echo $INSTALL_PACKAGES
        # exit
        TMP=$( mktemp -d --suffix=.kernel-$LATEST_KERNEL_VERSION )
        cd $TMP

        IFS=$'\n'
        URLS=""
        for package in $INSTALL_PACKAGES; do
            echo -n Downloading $package...
            url="http://kernel.ubuntu.com/~kernel-ppa/mainline/v$LATEST_KERNEL_VERSION/$package"
            wget -q "$url"
            if [ "$?" -eq 0 ]; then
                echo done.
            else
                echo failed. aborting.
                exit 1
            fi
        done
        if [ -n "$INSTALL_PACKAGES" ]; then
            dpkg -i *.deb
        fi
    fi
fi