git: reducing repository size (gc and destructive)

Garbage Collection (non-destructive)

This goes especially well with removing a file that was added in the most recent, not yet pushed commit. Git garbage collection automates some of those cleanup jobs.
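For that "file in the last unpushed commit" case, the removal itself is just an amend (bigfile.bin is a hypothetical example); gc then reclaims the space:

# drop a file that was added in the most recent, not yet pushed commit
git rm --cached bigfile.bin
git commit --amend --no-edit
# the old blob is now unreferenced (apart from the reflog) and can be collected
git reflog expire --expire=now --all
git gc --prune=now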

I ran the following over my source folders:

for gitPath in $(find . -type d -name ".git" -readable -prune -exec realpath {} \; 2>/dev/null); do
    cd "$gitPath"
    echo "${gitPath}"
    # git branch
    sizeBefore=$(du -sh . | cut -f1)
    git fetch -p
    # delete all local branches except the long-lived ones (-r: skip if nothing matches)
    git branch --format "%(refname:short)" | grep -vE "^(develop|master|staging)$" | xargs -r git branch -D
    git gc --aggressive --prune=now
    sizeAfter=$(du -sh . | cut -f1)
    echo "${sizeBefore} -> ${sizeAfter}"
done

Reduce repository size (destructive)

Atlassian has a pretty good article about reducing git repository size. Also take a look at GitHub Help: removing-sensitive-data-from-a-repository and the BFG Repo-Cleaner.
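The destructive route boils down to rewriting history in a mirror clone and then expiring everything, roughly like this (a sketch following the BFG's own usage example; the repository URL and size threshold are placeholders):

# rewrite history in a mirror clone, then expire and repack
git clone --mirror git://example.com/some-big-repo.git
java -jar bfg.jar --strip-blobs-bigger-than 50M some-big-repo.git
cd some-big-repo.git
git reflog expire --expire=now --all && git gc --prune=now --aggressive
git push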


AWS sync is not reliable!

While migrating from s3cmd to the AWS CLI I noticed that changed files do not get synced when using aws s3 sync.

I have tested different versions so far and they all show the same behavior:

  • python2.7-awscli1.9.7
  • python2.7-awscli1.15.47
  • python3.6-awscli1.15.47

Test-Setup

  1. Setup AWS CLI utility and configure your credentials
  2. Create a testing S3 bucket
  3. Set up some random files (a quick md5sum check to verify the setup follows after this list)
    # create 10 random files of 10 MB each
    mkdir multi
    for i in {1..10}; do dd if=/dev/urandom of=multi/part-$i.out bs=1MB count=10; done
    # then copy the first 5 files over
    mkdir multi-changed
    cp multi/part-{1,2,3,4,5}.out multi-changed
    # and replace the content in 5 files
    for i in {6..10}; do dd if=/dev/urandom of=multi-changed/part-$i.out bs=1MB count=10; done
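As a quick check of the setup: parts 1–5 should be identical in both directories while parts 6–10 differ.

# verify which test files actually differ between the two directories
for i in {1..10}; do
    a=$(md5sum "multi/part-$i.out" | cut -d' ' -f1)
    b=$(md5sum "multi-changed/part-$i.out" | cut -d' ' -f1)
    [ "$a" = "$b" ] && echo "part-$i.out: identical" || echo "part-$i.out: changed"
done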

Testing S3 sync with aws cli

Cleanup

$ aws s3 rm s3://l3testing/multi --recursive 

Initial sync

$ aws s3 sync multi s3://l3testing/multi
upload: multi/part-1.out to s3://l3testing/multi/part-1.out         
upload: multi/part-3.out to s3://l3testing/multi/part-3.out      
upload: multi/part-2.out to s3://l3testing/multi/part-2.out      
upload: multi/part-4.out to s3://l3testing/multi/part-4.out      
upload: multi/part-10.out to s3://l3testing/multi/part-10.out    
upload: multi/part-5.out to s3://l3testing/multi/part-5.out      
upload: multi/part-6.out to s3://l3testing/multi/part-6.out      
upload: multi/part-8.out to s3://l3testing/multi/part-8.out      
upload: multi/part-7.out to s3://l3testing/multi/part-7.out      
upload: multi/part-9.out to s3://l3testing/multi/part-9.out  

Update files

Only 5 files should now be uploaded; the local timestamps of all 10 files have changed.

$ aws s3 sync multi-changed/ s3://l3testing/multi/

ERROR: No files synced!

Testing with s3cmd

Cleanup

$ aws s3 rm s3://l3testing/multi --recursive 

Initial sync

$ s3cmd sync --delete-removed multi/  s3://l3testing/multi/
upload: 'multi/part-1.out' -> 's3://l3testing/multi/part-1.out'  [1 of 10]
 10000000 of 10000000   100% in    1s     5.12 MB/s  done
upload: 'multi/part-10.out' -> 's3://l3testing/multi/part-10.out'  [2 of 10]
 10000000 of 10000000   100% in    1s     7.54 MB/s  done
upload: 'multi/part-2.out' -> 's3://l3testing/multi/part-2.out'  [3 of 10]
 10000000 of 10000000   100% in    1s     8.60 MB/s  done
upload: 'multi/part-3.out' -> 's3://l3testing/multi/part-3.out'  [4 of 10]
 10000000 of 10000000   100% in    1s     7.17 MB/s  done
upload: 'multi/part-4.out' -> 's3://l3testing/multi/part-4.out'  [5 of 10]
 10000000 of 10000000   100% in    1s     7.72 MB/s  done
upload: 'multi/part-5.out' -> 's3://l3testing/multi/part-5.out'  [6 of 10]
 10000000 of 10000000   100% in    1s     8.19 MB/s  done
upload: 'multi/part-6.out' -> 's3://l3testing/multi/part-6.out'  [7 of 10]
 10000000 of 10000000   100% in    1s     7.60 MB/s  done
upload: 'multi/part-7.out' -> 's3://l3testing/multi/part-7.out'  [8 of 10]
 10000000 of 10000000   100% in    1s     7.73 MB/s  done
upload: 'multi/part-8.out' -> 's3://l3testing/multi/part-8.out'  [9 of 10]
 10000000 of 10000000   100% in    1s     7.52 MB/s  done
upload: 'multi/part-9.out' -> 's3://l3testing/multi/part-9.out'  [10 of 10]
 10000000 of 10000000   100% in    1s     8.31 MB/s  done
Done. Uploaded 100000000 bytes in 12.9 seconds, 7.38 MB/s.

Now update the files

Only 5 files should now be uploaded; the local timestamps of all 10 files have changed.

s3cmd sync  --delete-removed multi-changed/  s3://l3testing/multi/ 
upload: 'multi-changed/part-10.out' -> 's3://l3testing/multi/part-10.out'  [1 of 5]
 10000000 of 10000000   100% in    1s     5.97 MB/s  done
upload: 'multi-changed/part-6.out' -> 's3://l3testing/multi/part-6.out'  [2 of 5]
 10000000 of 10000000   100% in    1s     9.45 MB/s  done
upload: 'multi-changed/part-7.out' -> 's3://l3testing/multi/part-7.out'  [3 of 5]
 10000000 of 10000000   100% in    1s     9.18 MB/s  done
upload: 'multi-changed/part-8.out' -> 's3://l3testing/multi/part-8.out'  [4 of 5]
 10000000 of 10000000   100% in    1s     8.81 MB/s  done
upload: 'multi-changed/part-9.out' -> 's3://l3testing/multi/part-9.out'  [5 of 5]
 10000000 of 10000000   100% in    1s     8.79 MB/s  done
Done. Uploaded 50000000 bytes in 5.8 seconds, 8.17 MB/s.

Note: s3cmd also supports --dry-run.

SUCCESS: File content got updated…
WARNING: …but the timestamps were not.

Analysis

Summary

Using --debug and aws s3api list-objects --bucket l3testing reveals that the objects are stored with storage class STANDARD and do have their (ETag/MD5) hashes.
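For a quick manual check you can compare a local MD5 with the ETag stored on S3 (a sketch using the test bucket from above; note that for multipart uploads the ETag is not a plain MD5 of the file but a hash over the part hashes plus a -<parts> suffix, which is exactly why naive MD5 comparison is unreliable here):

# compare the local MD5 of one test file with the ETag S3 reports for it
local_md5=$(md5sum multi/part-1.out | cut -d' ' -f1)
remote_etag=$(aws s3api head-object --bucket l3testing --key multi/part-1.out \
    --query ETag --output text | tr -d '"')
echo "local md5:   ${local_md5}"
echo "remote etag: ${remote_etag}"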

Using the AWS CLI options --exact-timestamps and --delete as well as the payload_signing_enabled configuration option changed nothing.

Looking at the sync strategies (search for syncstrategy) within the AWS CLI sources reveals that they are really shitty and, as GitHub issues show, they still do a lot of unnecessary things. Stack Overflow and GitHub reveal several related issues, also when syncing files over 5 GB.

AWS Default sync fails MD5 #facepalm

We also get this when checking with s3cmd after an initial AWS CLI sync:

$ s3cmd sync -v --dry-run  multi-changed/  s3://l3testing/multi/
INFO: No cache file found, creating it.
INFO: Compiling list of local files...
INFO: Running stat() and reading/calculating MD5 values on 10 files, this may take some time...
INFO: Retrieving list of remote files for s3://l3testing/multi/ ...
INFO: Found 10 local files, 10 remote files
INFO: Verifying attributes...
INFO: disabled md5 check for part-1.out
INFO: disabled md5 check for part-10.out
INFO: disabled md5 check for part-2.out
INFO: disabled md5 check for part-3.out
INFO: disabled md5 check for part-4.out
INFO: disabled md5 check for part-5.out
INFO: disabled md5 check for part-6.out
INFO: disabled md5 check for part-7.out
INFO: disabled md5 check for part-8.out
INFO: disabled md5 check for part-9.out
INFO: Summary: 0 local files to upload, 0 files to remote copy, 0 remote files to delete
INFO: Done. Uploaded 0 bytes in 1.0 seconds, 0.00 B/s.

Also, when we use s3cmd for the initial sync, the AWS CLI won't be able to sync properly afterwards either.

The AWS CLI internally uses boto3 and a CreateMultipartUploadTask for multipart uploads. Inspecting those uploads shows that the MD5 checksums for the consolidated uploaded parts are correctly transferred but somehow not stored.

Better solutions?

Tooling

Sure! My choice would be s4cmd, which does the sync properly and is currently as fast as node-s3-cli. The AWS CLI is just as fast but, well, has a faulty sync. node-s3-cli is based on Node and it is said to still have some issues.
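For reference, the equivalent sync with s4cmd would look roughly like this (a sketch; I am assuming its dsync subcommand for recursive directory sync here, check s4cmd --help for your version):

# initial upload and incremental update (assumed s4cmd dsync subcommand)
s4cmd dsync multi/ s3://l3testing/multi/
s4cmd dsync multi-changed/ s3://l3testing/multi/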

Performance

Activating the fast bucket option in the AWS console just gives you more reliable connections (less latency). The speed difference ranges roughly between -7% and +7% depending on the location. I can sometimes observe that it hangs a bit when using too many connections. Still, I do not recommend paying for that micro-option, since multipart uploads with files consolidated on the client side should be standard for the HTTPS S3 API.

Further notes

AWS just does MD5, which should be sufficient for most files (yet I have had MD5 collisions in my life as a developer!)

From their documentation

--payload_signing_enabled Refers to whether or not to SHA256 sign sigv4 payloads. By default, this is disabled for streaming uploads (UploadPart and PutObject) when using https.
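For completeness, that option is set in the s3 section of ~/.aws/config (a sketch; as noted above, enabling it did not change the sync behavior):

# ~/.aws/config
[default]
s3 =
    payload_signing_enabled = true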


Infojunk September 2018

Kotlin

Python

Markdown Notetaking

Some notetaking apps you should give a try. At least Notion is very promising (yet you have to pay).

Note: In the end I went with Visual Studio Code and its Markdown editors. Boostnote was the best free application (yet with bugs). Notion is the best paid app matching my requirements 😉

Web UX

Markdown WYSIWYM editors

Linux

  • ASSH Go wrapper around SSH with automated hops and gateways
  • USB Power Saving (Thinkpad)
  • [List of Linux Monitoring Tools](https://www.ubuntupit.com/most-comprehensive-list-of-linux-monitoring-tools-for-sysadmin/)
  • Tracktion7
  • osquery – query system resources, by Facebook

Productivity

  • CodeStream – make working on source code collaborative (intelligent and live comments 😉)

Web Scraping and Acceptance Testing

Forget PhantomJS or Selenium! Nightmare is the shit if you wanna quickly scrape data or need a background browser. Of course acceptance testing should be done with WebdriverIO.

Android Apps

Other

Connecting to Check Point VPN SNX in Linux

Prerequisites

Ensure you have received their e-mail with the following information:

  • VPN Certificate file (.p12)
  • Your VPN password
  • Your server username

Please use that information to replace the placeholders in the scripts found in this tutorial.

Installation script

You can either download from their website (crappy and frustrating) or get it directly via http://gateway-ip.

Look for a file called snx_install_linux**.sh

wget http://gateway-ip/**/snx_install_linux**.sh

Security: we take a look at what is distributed and how running it will affect our system:

$ cat snx_install_linux30-7075.sh | sed -e 's/^.*\(\x42\x5A.*\)/\1/g' | tar -jtvf -
-rwxr-xr-x builder/fw 3302196 2012-12-06 14:02 snx
-r--r--r-- builder/fw 747 2012-12-06 14:02 snx_uninstall.sh

Installation

$ sudo chmod +x snx_install_linux30-7075.sh
$ sudo ./snx_install_linux30-7075.sh

You may have some libraries missing since the client is still 32-bit.

$ sudo ldd /usr/bin/snx | grep "not found"
libpam.so.0 => not found
libstdc++.so.5 => not found

So here we need some legacy 32-bit packages:

$ sudo apt-get install libx11-6:i386 libstdc++5:i386 libpam0g:i386
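If the i386 architecture is not enabled on your system yet (it may already be), add it first:

sudo dpkg --add-architecture i386
sudo apt-get update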

Connect to VPN

$ snx -c path-to-key/rl_johnbarleycorn.p12 -g -s company.inetservices.com companyvpn
Check Point's Linux SNX
build 800007075
Please enter the certificate's password:
SNX authentication:
Please confirm the connection to gateway: companyvpn VPN Certificate
Root CA fingerprint: MELT ELSE FUN BLUE ONUS GORE GAD SWAM VAST CHAT YAWL FOUR
Do you accept? [y]es/[N]o:
y
SNX - connected.
Session parameters:
===================
Office Mode IP : 172.16.10.145
Timeout : 12 hours

(exit code 0)

Debugging

$ ssh -vvv vq
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
[…]

Check what it has set up:

$ ifconfig | grep -A 8 tunsnx
tunsnx: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 172.16.10.145  netmask 255.255.255.255  destination 172.16.10.144
        inet6 fe80::ed2a:98f2:a47:8555  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25  bytes 2252 (2.2 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

And for the routes:

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.7.5.0        0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.7.6.0        0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.8.4.0        0.0.0.0         255.255.254.0   U     0      0        0 tunsnx
10.8.6.0        0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.14.14.15     0.0.0.0         255.255.255.255 UH    0      0        0 tunsnx
10.14.14.15     0.0.0.0         255.255.255.255 UH    2      0        0 tunsnx
10.200.1.12     0.0.0.0         255.255.255.255 UH    0      0        0 tunsnx
10.200.1.12     0.0.0.0         255.255.255.255 UH    2      0        0 tunsnx
10.200.13.0     0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.200.13.0     0.0.0.0         255.255.255.0   U     2      0        0 tunsnx
10.200.14.0     0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.200.14.0     0.0.0.0         255.255.255.0   U     2      0        0 tunsnx
10.200.28.9     0.0.0.0         255.255.255.255 UH    0      0        0 tunsnx
10.200.28.9     0.0.0.0         255.255.255.255 UH    2      0        0 tunsnx
10.200.29.0     0.0.0.0         255.255.255.0   U     0      0        0 tunsnx
10.200.29.0     0.0.0.0         255.255.255.0   U     2      0        0 tunsnx
172.16.10.68    0.0.0.0         255.255.255.255 UH    0      0        0 tunsnx

Automating connection

./snx-vpn-up:

#!/bin/bash

# trap ctrl-c and call ctrl_c()
trap ctrl_c INT

function ctrl_c() {
  snx -d
}

showroutes() {
  echo Routes:
  echo =======
  ip route | grep tunsnx
  if [ "$?" -ne 0 ]; then
    echo "Something failed. No routes? Try again."
    echo
    snx-vpn-down
    exit 1
  fi
}

ROUTES=$( ip route | grep tunsnx )
if [ ! -z "$ROUTES" ]; then
   echo "Already connected."
   echo
   showroutes
   exit 1
fi

echo "SNX - Connecting..."
echo 'PASSWORD' | snx -g -c path-to-key/rl_johnbarleycorn.p12  -s IP
sleep 1
showroutes
sleep 1
echo
echo /home/$( whoami )/snx.elg
echo =====
tail -n 1000 -f /home/$( whoami )/snx.elg

If this stops working at any point in the future, use expect.
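A rough sketch of what that could look like (hypothetical; adjust the prompt strings to your snx build):

#!/bin/bash
# hypothetical sketch: drive snx interactively via expect instead of piping the password
expect <<'EOF'
spawn snx -g -c path-to-key/rl_johnbarleycorn.p12 -s IP
expect "password:"
send "PASSWORD\r"
expect {
    "Do you accept?" { send "y\r"; exp_continue }
    "connected."     { }
}
EOF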

./snx-vpn-down:

#!/bin/bash
if [ -z "$( pgrep snx)" ]; then
  echo "SNX was not running."
  exit 1
fi

snx -d


GitLab: checkout all available repositories

Generate a private token

https://<GITLAB-SERVER1>/profile/personal_access_tokens
https://<GITLAB-SERVER2>/profile/personal_access_tokens

Checkout a list of all available repositories

QUERY='.[] | .path_with_namespace + "\t" + .ssh_url_to_repo' # JQ query
curl --request GET --header "PRIVATE-TOKEN: <PRIVATE-TOKEN>" "<GITLAB-SERVER1>/api/v4/projects?simple=true&per_page=65536" | jq -r "$QUERY" > repo.list
curl --request GET --header "PRIVATE-TOKEN: <PRIVATE-TOKEN>" "<GITLAB-SERVER2>/api/v3/projects?simple=true&per_page=65536" | jq -r "$QUERY" >> repo.list
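Note: GitLab caps per_page at 100, so on larger instances you have to page through the results, e.g.:

# page through the v4 API until an empty page is returned
page=1
while true; do
    chunk=$(curl --silent --header "PRIVATE-TOKEN: <PRIVATE-TOKEN>" \
        "<GITLAB-SERVER1>/api/v4/projects?simple=true&per_page=100&page=${page}")
    [ "$(echo "${chunk}" | jq 'length')" -eq 0 ] && break
    echo "${chunk}" | jq -r "$QUERY" >> repo.list
    page=$((page + 1))
done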

Create directories for repositories

cat repo.list | cut -f1 | xargs mkdir -p

Checkout projects (with GNU parallel)

parallel --colsep '\t' --jobs 4 -a repo.list git clone {2} {1}

Build list of git repositories

find -type d -name ".git"  | xargs realpath | xargs dirname > path.list  

Report repository branch or checkout branch

cat path.list | xargs -I{} sh -c "cd {}; echo {}; git branch"
cat path.list | xargs -I{} sh -c "cd {}; echo {}; git checkout master"
cat path.list | xargs -I{} sh -c "cd {}; echo {}; git checkout develop"

Note: when you are migrating repositories you should use git clone --mirror

Update: getting all available repositories this way may not work; if you don't get all projects and just get 404s you're fucked. Try creating the list from what you see while browsing GitLab or try to get admin access.

Infojunk August 2018

Linux

It’s about responsiveness – not about best performance!

Apache

Hardware

Coding

Python

Yes, deep dive into Python. And I don’t like it. As well as PHP. Do Rust. Trust me!

Math

AutoFS: Indirect User-Automounter for AWS S3 using either s3fs or goofyfs

I recently discovered the benefits of autofs and struggled with some issues when mounting S3 buckets. I didn't find anything similar, so I wrote auto.s3, which is capable of using either FUSE s3fs or goofyfs.

auto.s3 uses the AWS CLI and jq to resolve a per-user mount space to /home/<user>/Remote/S3/<aws-profile>/<bucket>/** with correct file and directory permissions.

The scripts currently run on my Ubuntu Bionic Beaver but it should be possible to use them on other distributions with minimal work. For OSX – nah… pay me!

Please read the comments included in the files!

/etc/auto.s3

#!/bin/bash

##
# AutoFS user-folder indirect Automounter for S3 using either FUSE goofyfs or s3fs (0.1)
# 
# ----------------------------------------------------------------------------
# "THE FUCK-WARE LICENSE" (Revision 1):
# <mg@evolution515.net> wrote this file. As long as you retain this notice you
# can do whatever you want with this stuff. If we meet some day, and you think
# this stuff is worth it, you can buy me a beer in return or have sex with me, 
# or both.
# ----------------------------------------------------------------------------
#
# Requirements
# - AWS CLI installed
# - JQ installed
# - Either FUSE goofyfs or s3fs installed
#
# Usage
#  - place config to $S3FS_CONFIG directory using s3fs config format (ACCESS_KEY:ACCESS_SECRET)
#  - place this file to /etc/auto.s3 and make it executable
#  - add to /etc/auto.master: /home/<user>/Remote/S3 /etc/auto.s3 --timeout=3000
#  - choose the backend via the config section in this file (NOTE: goofyfs needs the goofyfs-fuse wrapper below)
#  - cd <mountpoint>/<aws-profile>/<bucket>
# 
# Debugging
# - Stop system service by: 
#   systemctl stop autofs
# - Execute as process (use --debug to see mount commands)
#   automount -f -v 
#
# Clean up mountpoints (when autofs hangs or mountpoints are still used)
#   mount | grep autofs | cut -d' ' -f 3 | xargs umount -l 
#    
# Logging
# - Logs go to syslog except you are running automount within TTY
# 
# Notes
# - goofyfs makes sometimes trouble - use s3fs!
# - Daemon needs to run as root since only root has access to all mount options
# - Additional entries can be defined with the -Dvariable=Value map-option to automount(8).
# - Alternative fuse style mount can be done by -fstype=fuse,allow_other :sshfs\#user@example.com\:/path/to/mount
# - We do not read out .aws/config since not all credentials do necessary have S3-access
# - https://github.com/kahing/goofys/pull/91/commits/07dffdbda4ff7fc3c538cb07e58ad12cc464b628
# - goofyfs catfs cache is not activated by default
# - chown/chmod is not that nice but works ;)
# - other backends not planned at the moment
#
# AWS Commands
# - aws s3api list-buckets
# - aws s3api list-objects --bucket <bucket>
#
# FAQ
# -  https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ
#
# Autofs provides additional variables that are set based on the user requesting the mount:
# 
#   USER   The user login name
#   UID    The user login ID
#   GROUP  The user group name
#   GID    The user group ID
#   HOME   The user home directory
#   HOST   Hostname (uname -n)
#
# From exports
#
#   AUTOFS_GID="1000"
#   AUTOFS_GROUP="ctang"
#   AUTOFS_HOME="/home/ctang"
#   AUTOFS_SHOST="refpad-16"
#   AUTOFS_UID="1000"
#   AUTOFS_USER="ctang"
#

# Strict mode
set -euo pipefail -o errtrace

# Config
S3FS_CONFIG="${AUTOFS_HOME:-$HOME}/.autofs/s3fs" # user directory
BACKEND="goofyfs" # s3fs|goofyfs - not goofyfs requires goofyfs-fuse!
DEBUG=0 # 0|1 where 1 is on - output will go to syslog or journald
UMASK="750" # Umask for mountpoint placeholder directories
OPTS="defaults,noatime" # mount options
if [[ -z "${GID:-}" ]]; then
    GID="$(id -g)"
fi

# We ensure every command output can be parsed in neutral form
export LC_ALL=C
export AWS_SDK_LOAD_CONFIG=0

# Const
PWD="$(pwd)"
SCRIPT_NAME=`basename "$0"`
LOGGER_CMD="logger -i -t ${SCRIPT_NAME}"
if test -t 1; then 
    # if tty
    LOGGER_CMD="${LOGGER_CMD}  --no-act --stderr"
fi
PROFILES=()

if ! which jq 1>/dev/null 2>&1; then
     $LOGGER_CMD "Cannot find jq binary"
     exit 1
fi

if ! which aws 1>/dev/null 2>&1; then
     $LOGGER_CMD "Cannot find aws binary"
     exit 1
fi

# If the user is already in a mount point this script will be called by root
# so we need to remap some stuff
if [[ ! "${HOME:-}" == "${PWD}/"* ]] && [[ "${PWD}" =~ ^(/home/[^/]+) ]]; then
    S3FS_CONFIG=${S3FS_CONFIG/${AUTOFS_HOME:-$HOME}/${BASH_REMATCH[1]}}
    HOME="${BASH_REMATCH[1]}"
    USER="${HOME##*/}"
    AUTOFS_UID="$(id -u ${USER})"
    AUTOFS_GID="$(id -g ${USER})"
    $LOGGER_CMD "Initializing. Remapping home to ${HOME}, user=${USER}, config=${S3FS_CONFIG}"
fi

# Prevent errors
if [[ ! -d ${S3FS_CONFIG} ]]; then
     $LOGGER_CMD "Config directory ${S3FS_CONFIG} not found."
     exit 1
fi

# Mountpoint needs to be owned by user 
chown -R ${AUTOFS_UID:-$UID}:${AUTOFS_GID:-$GID} "${S3FS_CONFIG}"
chmod -R 700 "${S3FS_CONFIG}"

# Create indirect mount points for s3 profiles
PROFILES=($(ls -1 ${S3FS_CONFIG}))
if [[ -z "${PROFILES[*]}" ]]; then
    $LOGGER_CMD "No profiles found within ${S3FS_CONFIG}"
else
    for profile in "${PROFILES[@]}"; do
        chmod 600 ${S3FS_CONFIG}/${profile}
        if [[ ! -d "${PWD}/${profile}" ]]; then
            $LOGGER_CMD "Creating ${PWD}/${profile}"
            mkdir -p "${PWD}/${profile}"  || true > /dev/null
            chmod ${UMASK} "${PWD}/${profile}"
            chown ${AUTOFS_UID:-$UID}:${AUTOFS_GID:-$GID} "${PWD}/${profile}"
        fi
    done
fi

# Requested profile
PROFILE="${1:-}"
if [[ ! -e "${S3FS_CONFIG}/${PROFILE}" ]]; then
    $LOGGER_CMD "No valid profile=${PROFILE} given! "
    exit 1
fi
$LOGGER_CMD "Profile: $@"
if [[ -z "${PROFILE}" ]]; then
    $LOGGER_CMD "No profile given" 
    exit 1
fi

if [[ "${BACKEND}" == "s3fs" ]]; then
    if ! which s3fs 1>/dev/null 2>&1; then
        $LOGGER_CMD "Cannot find s3fs installation"
        exit 1
    fi
    OPTS="-fstype=fuse.s3fs,uid=${AUTOFS_UID:-${UID}},gid=${AUTOFS_UID:-${GID}},umask=000,${OPTS},_netdev,allow_other,default_permissions,passwd_file=${S3FS_CONFIG}/${PROFILE},use_cache=$(mktemp -d)"
    if [[ "$DEBUG" -eq 1 ]]; then
        OPTS="${OPTS},dbglevel=info,curldbg"
    fi
elif [[ "${BACKEND}" == "goofyfs" ]]; then
    if ! which goofyfs-fuse 1>/dev/null 2>&1; then
        $LOGGER_CMD "Cannot find goofyfs installation"
        exit 1
    fi
    OPTS="-fstype=fuse.goofyfs-fuse,${OPTS},_netdev,nonempty,allow_other,passwd_file=${S3FS_CONFIG}/${PROFILE},--file-mode=0666,nls=utf8"
    if [[ "${DEBUG}" -eq 1 ]]; then
        OPTS="${OPTS},--debug_s3,--debug_fuse"
    fi
else
    $LOGGER_CMD "Unsupported backend ${BACKEND}"
    exit 1
fi

read -r -d '' CREDENTIALS < "${S3FS_CONFIG}/${PROFILE}" || true # read returns non-zero at EOF, do not abort under set -e
export AWS_ACCESS_KEY_ID="${CREDENTIALS%%:*}"
export AWS_SECRET_ACCESS_KEY="${CREDENTIALS##*:}"
BUCKETS=($(aws s3api list-buckets --output json | jq -r '.Buckets[].Name'))
printf "%s\n" "${BUCKETS[@]}" | awk -v "opts=${OPTS}" -F '|' -- '
    BEGIN { ORS=""; first=1 }
    {
          if (first)
            print opts; first=0
          bucket = $1
          # Enclose mount dir and location in quotes
          # Escape "$" and "&" in the bucket name as they are special in autofs maps
          gsub(/\$$/, "\\$", bucket);
          gsub(/\&/, "\\\\&", bucket)
          print " \\\n\t \"/" bucket "\"", "\":" bucket "\""
          # print " \\\n\t " bucket, ":"bucket
        }
    END { if (!first) print "\n"; else exit 1 }
    '

/usr/local/bin/goofyfs-fuse

#!/bin/bash

## 
# GoofyFS - FUSE wrapper
#
# supports passwd_file argument analogue to s3fs since goofyfs does not support custom credentials file
# (see https://github.com/kahing/goofys/pull/91/commits/07dffdbda4ff7fc3c538cb07e58ad12cc464b628)
#
# usage: mount \
#   -t fuse.goofyfs-fuse -o allow_other,--passwd_file=/home/ctang/.autofs/s3fs/<profile>,--file-mode=0666,nls=utf8,--debug_s3,--debug_fuse \
#   <bucket> \
#   <mountpoint>

# Strict mode
set -euo pipefail -o errtrace

ARGS=()
PASSWD_FILE=""
BUCKET=""
MOUNTPOINT=""

while (($#)); do
    case "$1" in
        -t)
            shift 1
            ARGS+=("-t" "fuse.goofyfs") 
            ;;
        -o)
            shift 1
            OPTS=()
            for arg in  $( echo "${1}" | tr ',' '\n' ); do
                case "${arg}" in
                    --passwd_file=*)
                        PASSWD_FILE="${arg##*=}"
                        ;;
                    --*)
                        ARGS+=($( echo "${arg}" | tr '=' '\n' ))
                        ;;
                    *)
                        OPTS+=("${arg}")
                esac
            done
            opts="$(printf "%b," ${OPTS[@]})"
            ARGS+=("-o" "${opts%,}")
            ;;
        *)
            if [[ -z "${BUCKET}" ]]; then
                BUCKET="$1"
            elif [[ -z "${MOUNTPOINT}" ]]; then
                MOUNTPOINT="$1"
            else
                ARGS+=("$1")
            fi
            ;;
    esac
    shift 1
done

ARGS+=("${BUCKET}" "${MOUNTPOINT}")

export AWS_SDK_LOAD_CONFIG=0
if [[ -n "${PASSWD_FILE}" ]]; then
    read -r -d '' CREDENTIALS < "${PASSWD_FILE}" || true # read returns non-zero at EOF, do not abort under set -e
    export AWS_ACCESS_KEY_ID="${CREDENTIALS%%:*}"
    export AWS_SECRET_ACCESS_KEY="${CREDENTIALS##*:}"
fi

goofyfs "${ARGS[@]}"
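Putting it together, usage looks roughly like this (a sketch; myaws is a hypothetical profile name and ctang the example user from the comments above):

# 1. per-user credentials in s3fs format (ACCESS_KEY:ACCESS_SECRET)
mkdir -p /home/ctang/.autofs/s3fs
echo 'AKIAXXXXXXXXXXXXXXXX:THE_SECRET_KEY' > /home/ctang/.autofs/s3fs/myaws
chmod 600 /home/ctang/.autofs/s3fs/myaws

# 2. register the indirect map in /etc/auto.master and restart autofs
#    /home/ctang/Remote/S3 /etc/auto.s3 --timeout=3000
sudo systemctl restart autofs

# 3. buckets of that profile are then mounted on demand
cd /home/ctang/Remote/S3/myaws/<bucket>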

Decoding MySQL ~/.mylogin.cnf

A little tool to decode MySQL's badly secured login path file. It does the same as the official MySQL server tool "my_print_defaults" (based on the MySQL OSS Python libs).

More security can be achieved by: https://www.percona.com/blog/2016/10/12/encrypt-defaults-file/

#!/usr/bin/env php
<?php

$fp = fopen(getenv('HOME') . '/.mylogin.cnf', "r");
if (!$fp) {
    die("Cannot open .mylogin.cnf");
}

fseek($fp, 4);
$key = fread($fp, 20);

// generate real key
$rkey = "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0";
for ($i = 0; $i < strlen($key); $i++) {
    $rkey[$i % 16] = ($rkey[$i % 16] ^ $key[$i]);
}

$section = null;
$settings = [];

while ($len = fread($fp, 4)) {

    // as integer
    $len = unpack("V", $len);
    $len = $len[1];

    // decrypt
    $crypt = fread($fp, $len);
    $plain = openssl_decrypt($crypt, 'aes-128-ecb', $rkey, true);
    $decoded = preg_replace("/[^\\x32-\\xFFFF]/", "", $plain);
    if (preg_match('/^\[([^\]]+)]/', $decoded, $matches)) {
        $section = $matches[1];
        $settings[$section] = [];
    } elseif (preg_match('/^(\w+)=(.*)/', $decoded, $matches)) {
        $settings[$section][$matches[1]] = $matches[2];
    }
}
fclose($fp);

echo json_encode($settings, JSON_PRETTY_PRINT);

Source: https://gist.github.com/robocoder/024442d06b8a75d292d58c5884be4642

Source of mycli (Python): _mylogin_cnf of mycli

Albert Launcher 0.14: Switch Application Window Plugin

Since I really don't like the Switcher plugin for GNOME and I stick to Albert Launcher, I added this extension. Maybe it will get accepted into their Python extensions.

Drop it into ~/.local/share/albert/org.albert.extension.python/switch-app-window.py (or one of the other provided locations) and activate it within the extensions settings:

import re
import subprocess

from albertv0 import *

__iid__ = "PythonInterface/v0.1"
__prettyname__ = "Switch App Window"
__version__ = "1.0"
__trigger__ = "w "
__author__ = "Markus Geiger <mg@evolution515.net>"
__id__ = "window"
__dependencies__ = []

iconPath = iconLookup("go-next")

def handleQuery(query):
    stripped = query.string.strip()
    if not query.isTriggered and not stripped:
        return

    results = []
    process = subprocess.Popen(['wmctrl', '-l'], stdout=subprocess.PIPE, encoding='utf8')

    output, error = process.communicate()

    patt = re.compile(r'^(\w+)\s+(\d+)\s+([^\s]+)\s+(.+)$')
    window_re = re.compile(r'^(.+)\s+-\s+(.+)$')

    for line in output.split('\n'):
        match = patt.match(line)
        if not match:
            continue

        window_id = match.group(1)
        fulltitle = match.group(4)
        if not query.string.lower() in fulltitle.lower():
            continue

        titlematch = window_re.match(fulltitle)

        if titlematch:
            windowtitle = titlematch.group(1)
            program_title = titlematch.group(2)
        else:
            program_title = fulltitle
            windowtitle = fulltitle

        results.append(
            Item(
                id="%s_%s" % (__id__, window_id),
                icon=iconPath,
                text=program_title,
                subtext=windowtitle,
                completion=query.rawString,
                actions=[
                    ProcAction("Focus", ["wmctrl", "-ia", window_id]),
                    ProcAction("Close", ["wmctrl", "-ic", window_id])
                ]
            )
        )
    return results

git: encrypt credentials within repository

Especially in the microservices era you should use a config server and never store your credentials in a repository!

You should not use git smudge/clean filters for encryption. Why? Example!

Let's create an example repository:

% TMP=$(mktemp -d)
% cd $TMP
% git init
% echo 'Hello world!' > credentials

Add .gitattributes

/credentials filter=crypto

Add .git/config

[filter "crypto"]
smudge = openssl enc -aes-256-cbc -salt 
clean = openssl enc -aes-256-cbc -salt 
require

Note: required indicates that these commands need to exit with code 0; otherwise it could happen that the files are added without any encryption. You can test this by using smudge = gpg -d -q --batch --no-tty -r <SIGNATURE> and clean = gpg -ea -q --batch --no-tty -r <SIGNATURE> filters.
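For reference, the gpg variant of the same .git/config filter would look like this (a sketch; <SIGNATURE> is the key ID to encrypt for, decryption picks the key automatically):

[filter "crypto"]
    clean = gpg -ea -q --batch --no-tty -r <SIGNATURE>
    smudge = gpg -d -q --batch --no-tty
    required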

Add the files to the stage and commit

% git add .gitattributes credentials
% git add credentials
enter aes-256-cbc encryption password:
Verifying - enter aes-256-cbc encryption password:
% git status
Changes to be committed:
    new file:   .gitattributes
    new file:   credentials
% git commit -m "Initial commit"
[master (root-commit) 860cedc] Initial commit
 2 files changed, 2 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 credentials

Note: The .git/config for filters will not be added to the commit!

Clone the example repository

% TMP2=$(mktemp -d)
% cd $TMP2
% git clone $TMP .
% cat credentials
Salted__��:�.��1٩�H��8��[��25%% 

We now have an encrypted file within our cloned repository. To decode it we would need the smudge filter (and the key) set up again. On the other side, anyone can still commit this file and break the configuration.

Thoughts

  • Yes, you can automate credentials encoding or decoding with filters.
  • But you could also encode or decode manually and check in or out by hand. That doesn't make much of a difference!
  • GitLab provides file locks. This probably makes sense as long as you use GitLab premium. Otherwise you might use git commit hooks.
  • If you wanna use git repos to store credentials and have them fully encrypted you may have a look at [Git Annex GCrypt](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/). Yet this is not based on git but is its own project written in Haskell! It just uses the git format!
  • You’re doomed if you store your production credentials in clear text so every developer can read it.
  • Credentials should never be stored in repositories!
  • Use config servers and tools like consul for creating configs or directly pull the configs from the servers!

So, why all these complications?! This can easily get messed up. This is just a badly designed »solution«. Therefore I would call it only a »workaround« and not a real solution!