ClipWriter, a PowerShell script that emulates keyboard input to transfer files and text

I wrote this tool today after getting annoyed at having to make a bunch of changes to a configuration file on a client site through a restrictive remote tool that doesn’t let you paste or transfer any files, for “security reasons” (read: BOFH).

It pastes text, files and entire directory structures by simulating keyboard input. I found some tools that do stuff like this, but all of them were somewhat shady and closed source. I thought someone else might enjoy this, so here it is.

Enable remote management of Windows Server Core and Hyper-V Core

This is a reference for the commands to enable the firewall rules necessary to remotely manage Windows Server Core and Hyper-V Core.

I keep having to look these up…

  • Enable-NetFirewallRule -DisplayName "Windows Management Instrumentation (DCOM-In)"
  • Enable-NetFirewallRule -DisplayGroup "Remote Event Log Management"
  • Enable-NetFirewallRule -DisplayGroup "Remote Service Management"
  • Enable-NetFirewallRule -DisplayGroup "Remote Volume Management"
  • Enable-NetFirewallRule -DisplayGroup "Remote Scheduled Tasks Management"
  • Enable-NetFirewallRule -DisplayGroup "Windows Firewall Remote Management"

You also have to run one of these rules on the computer you intend to manage from. Yes, the client.

  • Enable-NetFirewallRule -DisplayGroup "Remote Volume Management"


Forcing Cygwin to create sane permissions on Windows

If you use Cygwin to mainly manipulate files in your regular Windows filesystem, under /cygdrive/…, you have probably seen this message more than a few times:

“The permissions on <node> are incorrectly ordered, which may cause some entries to be ineffective”

You have also likely seen “NULL SID” as the top entry in permission lists.

The Cygwin website has a page about filemodes, which explains why this happens.

In short, you have to edit /etc/fstab in Cygwin, and add “noacl” to the mount options for /cygdrive. Here is my /etc/fstab, for reference:

# /etc/fstab
#    This file is read once by the first process in a Cygwin process tree.
#    To pick up changes, restart all Cygwin processes.  For a description
#    see

# This is default anyway:
#none /cygdrive cygdrive binary,posix=0,user 0 0
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0

After editing this option, you have to stop every single Cygwin process for it to take effect. The easy way out is to reboot your system.

Property ‘BindToHostTpm’ does not exist in class ‘Msvm_SecuritySettingData’

Microsoft has apparently messed up their integration of the Hyper-V Manager on Windows 10 with Hyper-V Server hosts, resulting in the above error message showing up on the “Security” tab of virtual machines.

So now what do you do if you want to disable Secure Boot to load some Linux distribution that doesn’t support it?

Well, we use PowerShell, remote into the Hyper-V Core Server, and disable secure boot from the CLI instead:

> Enter-PSSession <hyperHost>
> Set-VMFirmware <vm> -EnableSecureBoot off

And done! Changes take effect immediately. You can now boot your Linux goodness.

Script for creating a compressed image file from a Raspbian SD card

I’ve previously presented a manual process for doing this, but lately I’ve had to do it more often, and I figured it was about time for automation.

This script will:

  • Run e2fsck on the root file system
  • Erase logfiles, such as bash_history and stuff in /var/log
  • Erase the test.h264 video file
  • Wipe resolv.conf
  • Defragment the root file system
  • Resize the root file system to the minimum size, according to resize2fs
  • Resize the root partition to match the file system
  • Zero-fill remaining free space on both partitions
  • Create an image file of the exact length of the partitions
  • Compress the image file to .zip, using the “ultra” setting for compression

Required runtime tools include:

  • dd
  • 7z
  • resize2fs
  • dumpe2fs
  • e4defrag

This script is intended for Raspbian SD card images only, and may not work as intended with other distributions.
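The partition-resizing steps above boil down to reading the shrunken file system’s size from dumpe2fs and using that as the image length. A minimal sketch of that calculation (the helper function and the canned sample output are mine, not taken from the script):

```shell
# fs_bytes: parse "Block count" and "Block size" from `dumpe2fs -h`
# output on stdin and print the file system size in bytes.
fs_bytes() {
    awk -F: '
        /^Block count:/ { gsub(/ /, "", $2); count = $2 }
        /^Block size:/  { gsub(/ /, "", $2); size = $2 }
        END { print count * size }
    '
}

# Real use would be: dumpe2fs -h /dev/sdX2 | fs_bytes
# Canned dumpe2fs-style output, for illustration:
printf 'Block count:              65536\nBlock size:               4096\n' | fs_bytes
```

That byte count is what you would hand to dd (or truncate) when carving out the image file.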

The script is currently hosted on github, and of course, there’s a local copy.

BONUS: A more general script for shrinking the last partition of existing image files. You might be able to draw inspiration from this.

Using external Certificate Authority certificates in a restricted or closed environment

In this example, we’ll be using a wildcard certificate from Let’s Encrypt, obtained through their recently released wildcard certificate offering.

What we’re doing

The use case is that we want, for one reason or another, to use this certificate in an environment that does not have unrestricted internet access, such as a health institution or an office dealing with sensitive data. As we’ll soon discover, this presents some challenges for clients trying to verify the authenticity of the certificates the internal servers present to them.

So what’s the problem?

When a client, such as a web browser, connects to a web server using SSL, that server presents a certificate to the client, including any intermediate certificates between the server certificate and the root certification authority. The client is expected to have, and trust, the root certificate. On Debian Linux and derivatives, the root certificates are provided in the ca-certificates package. On Windows they’re provided through Windows Update.

Contained within the properties of the provided certificate, and any intermediates, there are most likely going to be one or more URLs to Certificate Revocation List (CRL) distribution points and/or Online Certificate Status Protocol (OCSP) endpoints. Before it will trust the certificate provided by the server, any well-implemented client will want to visit one of these services to ensure that none of the certificates in the chain have been revoked by their respective authorities.

How can I determine the CRL and OCSP URLs (Linux)?

We’ll do this initially on Linux, using OpenSSL on our PEM encoded certificate. For an example on Windows using a .pfx/.p12 encoded certificate, see below. See the Wikipedia article on X.509 certificates for a reference on commonly used certificate formats.

Our example certificate, provided by Let’s Encrypt and retrieved using certbot, is stored in four base64 encoded files:

  • cert.pem – The public part of the certificate, which SSL/TLS servers pass to clients on authentication
  • chain.pem – The public part of the certificates of any intermediate certificate authorities in the chain
  • fullchain.pem – Simply the public certificate followed by the chain; cert.pem and chain.pem in one file. Some configurations want this as input.
  • privkey.pem – The private part of the certificate, not to be given to anyone, ever. This is your key to the public part of the certificate.


To extract the CRL and OCSP URLs we need to access for verification, we must investigate the contents of the cert.pem and chain.pem files. The OpenSSL tool, probably available in your package manager on Linux, is appropriate for the job.

$ openssl x509 -text -in cert.pem
        Version: 3 (0x2)
        Serial Number:
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
            Not Before: Apr 25 10:30:02 2018 GMT
            Not After : Jul 24 10:30:02 2018 GMT
        Subject: CN = *
------ snip -----
            Authority Information Access: 
                OCSP - URI:
                CA Issuers - URI:
------ snip -----

Here we see the OCSP address for our certificate. An old client that doesn’t support OCSP will not be able to check this and will simply assume the certificate has not been invalidated by the authority. Up-to-date clients will use OCSP here. Since a CRL requires distributing a complete list of invalidated certificates, it is not a practical solution for Let’s Encrypt, due to the sheer volume of certificates and the relatively short lifetime (three months in the example) of the issued certificates.
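If you only want the OCSP URL and not the full text dump, openssl can extract it directly with the -ocsp_uri option. A self-contained sketch follows; the throwaway certificate and its OCSP URL are made up for the demonstration, not Let’s Encrypt’s actual values:

```shell
set -e
tmp=$(mktemp -d)

# Minimal config for a throwaway self-signed cert carrying an Authority
# Information Access extension with a made-up OCSP URL.
cat > "$tmp/ext.cnf" <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3
prompt = no
[dn]
CN = example.test
[v3]
authorityInfoAccess = OCSP;URI:http://ocsp.example.test
EOF

openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -config "$tmp/ext.cnf" 2>/dev/null

# Prints just the OCSP responder URL from the certificate
openssl x509 -noout -ocsp_uri -in "$tmp/cert.pem"
```

Run against the real cert.pem and chain.pem, this saves you from grepping through the full -text output.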

If we take a look at the intermediate certificate instead, we’ll see both a CRL URL, as well as one for OCSP:

$ openssl x509 -text -in chain.pem
        Version: 3 (0x2)
        Serial Number:
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: O = Digital Signature Trust Co., CN = DST Root CA X3
            Not Before: Mar 17 16:40:46 2016 GMT
            Not After : Mar 17 16:40:46 2021 GMT
        Subject: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
------ snip -----
            Authority Information Access: 
                OCSP - URI:
                CA Issuers - URI:

            X509v3 Authority Key Identifier: 

            X509v3 Certificate Policies: 

            X509v3 CRL Distribution Points: 

                Full Name:
------ snip -----

So, to validate the intermediate certificate, the client will access either of the following CRLs:


Let’s do the same using common Windows tools.

How can I determine the CRL and OCSP URLs (Windows)?

On Windows, we’ll use certutil.exe to dump information about our .pfx file:

PS C:\howto> certutil -v -dump .\cert.pfx
Enter PFX password:
================ Certificate 0 ================
================ Begin Nesting Level 1 ================
Element 0:
X509 Certificate:
Version: 3
Serial Number: 0a0141420000015385736a0b85eca708
------ snip ----- Flags = 0, Length = 73
    Authority Information Access
        [1]Authority Info Access
             Access Method=On-line Certificate Status Protocol (
             Alternative Name:
------ snip ----- Flags = 0, Length = 35
    CRL Distribution Points
        [1]CRL Distribution Point
             Distribution Point Name:
                  Full Name:
------ snip -----
================ Certificate 1 ================
================ Begin Nesting Level 1 ================
------ snip ----- Flags = 0, Length = 63
    Authority Information Access
        [1]Authority Info Access
             Access Method=On-line Certificate Status Protocol (
             Alternative Name:
------ snip -----

We get the exact same URLs from the certutil command, run on a .pfx containing both the server certificate (certificate 1) and the intermediate certificate (certificate 0).

We have the URLs – now what?

So, from the above, we know our clients (and probably our servers, too) will attempt to access the following URLs:


Now we need to make some holes. Unless you’re planning on grabbing the external CRLs on a regular basis, overriding your internal DNS, in this case for, and then hosting them on an internal web server, your machines are simply going to have to reach the outside through small pinholes in order to perform certificate verification.

Since all the URLs use “http” and don’t specify a port number, we’ll be allowing traffic to port 80, the default port for http.

If you have an egress-filtering proxy, transparent or opaque (and your closed environment definitely should), it should be trivial to allow access to and, which happens to be the Squid notation for anything ending with those two domain names. This notation does not limit wildcard matching to the first subdomain, unlike what * does for certificates.

The less attractive alternative is to regularly resolve the CRL/OCSP host names and then allow connections to the resulting addresses.
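If you go down that road, something like the following loop can produce the address list to feed your firewall. The function name is mine; getent consults the system resolver:

```shell
# resolve_hosts: print "HOSTNAME ADDRESS" for each given host, suitable
# for generating firewall allow-rules from a list of CRL/OCSP hosts.
resolve_hosts() {
    for host in "$@"; do
        getent hosts "$host" | awk -v h="$host" '{ print h, $1 }'
    done
}

# Demonstrated with localhost; in real use you would pass the CRL/OCSP
# host names taken from the certificate dump instead.
resolve_hosts localhost
```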

How do I know it’s working?

Your first, and perhaps easiest, clue is a web browser, if applicable. Enter the URL of your internal server in a browser and look for the usual green padlock and lack of scary warning messages, indicating that an encrypted connection was successfully established. Then consider that your browser (and Windows itself) will cache OCSP and CRL results. There are some instructions on how to clear your OCSP and CRL cache here, and I’ll include them for reference:

PS C:\howto> certutil -urlcache * delete
------ lots of spam here ------
WinHttp Cache entries deleted: 175

Now for some debugging commands. The provided ones use PowerShell, but equivalents do of course exist for Linux.

All your devices need to be able to resolve the CRL/OCSP domains:

PS C:\howto> Resolve-DnsName

Name                           Type   TTL   Section    NameHost
----                           ----   ---   -------    --------
                               CNAME  60    Answer

Name       :
QueryType  : A
TTL        : 60
Section    : Answer
IP4Address :

Name                   :
QueryType              : SOA
TTL                    : 60
Section                : Authority
NameAdministrator      :
SerialNumber           : 2016062800
TimeToZoneRefresh      : 300
TimeToZoneFailureRetry : 3600
TimeToExpiration       : 604800
DefaultTTL             : 60

So far, so good. We also need to be able to connect to it:

PS C:\howto> Test-NetConnection -ComputerName -Port 80

ComputerName     :
RemoteAddress    :
RemotePort       : 80
InterfaceAlias   : Wi-Fi
SourceAddress    :
TcpTestSucceeded : True

The important part above is that TcpTestSucceeded returns “True”.

Finally, let’s verify that we can actually run Verify() on our certificate, since we have it.

PS C:\howto> $cert = Get-PfxCertificate .\cert.pfx
Enter password: ****

PS C:\howto> $cert.Verify()

If this returns “False”, there’s trouble somewhere along the line.


In short, you are going to need to figure out which servers your chosen certificate authority requires that you connect to, and allow all of the involved computers, phones, and other devices to connect to these servers. If you want to avoid this, you’ll have to deal with an internal, homebrew certificate authority, and getting that installed on all your devices opens a whole new can of worms. Let’s not go there today.

My tweaks to get Kali Linux running well on the GPD Pocket

Mostly notes to myself, but hey, maybe it helps you too!

Get Kali for the GPD Pocket

I installed using re4son’s modified image from here.

Removing the GRUB splash screen (it’s sideways anyway, which looks horrible, and I’m not multibooting)



Speeding up WiFi so it doesn’t lag if you SSH into the machine, and other power management tweaks

By default, Kali on the GPD Pocket will have some weird WiFi power saving mode enabled, which means that an incoming SSH session will feel very laggy unless the GPD is constantly sending data. The result is that an SSH session feels much smoother if you’re transferring a huge file at the same time, which is rather silly. A tool called “tlp” can disable the WiFi power saving.

apt-get install -y tlp
systemctl enable tlp
vim /etc/default/tlp

Firstly, disable WiFi power saving:

Speed up the disk:

Fewer problems with USB devices that don't take well to being suspended:

I like to have the machine run cool even while on AC power:
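The original configuration snippets did not survive here, so as a stand-in, this is roughly what the relevant /etc/default/tlp lines look like. The option names are real tlp settings, but the values are my assumptions, not necessarily the ones from the original post; check tlp’s documentation before copying:

```shell
# Hypothetical /etc/default/tlp fragment (values are assumptions)

# Disable WiFi power saving, fixing the laggy SSH sessions:
WIFI_PWR_ON_AC=off
WIFI_PWR_ON_BAT=off

# Speed up the disk by relaxing APM power management:
DISK_APM_LEVEL_ON_AC="254 254"

# Don't autosuspend USB devices that take it badly:
USB_AUTOSUSPEND=0

# Run cool even while on AC power:
CPU_SCALING_GOVERNOR_ON_AC=powersave
```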

Making sure the screen turns properly off and back on again when closing and opening the lid

If the laptop is not set to sleep, the screen on mine didn’t shut off when closing the lid, and if I shut it off with a script, it would randomly turn back on due to magnetic interference from the magnets in the case, in my bag, or in whatever else was nearby. I wrote a script to take care of this, with an autostart file to go along with it:


This is public domain.
gpd-screen-watcher needs to be executable 🙂

Some additional xorg config

This is from the page about running Arch Linux on the GPD Pocket:


Section "Monitor"
    Identifier "DSI-1"
    Option "Rotate" "right"
EndSection


Section "InputClass"
    Identifier "GPD trackpoint"
    MatchProduct "SINO WEALTH Gaming Keyboard"
    MatchIsPointer "on"
    Driver "libinput"
    Option "Emulate3Buttons" "True"
    Option "MiddleEmulation" "True"
EndSection

EDIT: The last two lines of InputClass, Emulate3Buttons and MiddleEmulation, were added here on 2018-08-13, and allow clicking both mouse buttons to simulate a middle click, for pasting and such.


Have fun with Kali 😀

Outlook removes “extra line breaks” from plain text emails: How to stop it

“We removed extra line breaks from this message.”

Well, how helpful of you. Now the email from my crontab script is all in one line.

You can disable this “helpful functionality” permanently in Outlook by doing this:

  1. Open Outlook
  2. On the File tab, select Options
  3. In the Options window, select Mail
  4. In the Message format section, clear the “Remove extra line breaks in plain text messages” check box
  5. Click OK

That’s all fine and dandy for you, but what about your colleagues? Getting them all to do this would be a pain. Well, Outlook has this curious idea that lines ending with three or more spaces should not have this helpful behaviour applied to them. Thus, the easy solution to the problem is to add three spaces at the end of every. single. line.

It’s not as hard as it may sound. awk will do it for us.

awk '{ print $0"   " }'

This reads every single line, and re-prints it with three spaces at the end. Outlook is now happy.

You can easily pipe stuff into it for your crontab entries. The below is a useless example:

find /etc -name "*.conf" | awk '{ print $0"   " }'

You can also make your script print like this internally, which in Bash is done by redirecting stdout and stderr through an awk process, as such:

#!/usr/bin/env bash
exec > >(awk '{ print $0"   " }') 2>&1
echo "This goes to stdout"
echo "This goes to stderr" >&2
echo "More stdout"

After the exec line, anything printed by your script will be sent through awk and on through stdout. Outlook will no longer try to help.
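If awk feels like overkill, sed does the same job; an equivalent one-liner (my addition, not from the original setup):

```shell
# sed equivalent of the awk filter: match the end of every line ($)
# and append three spaces
printf 'first line\nsecond line\n' | sed 's/$/   /'
```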

How to deal with error 0xC03F6506 when upgrading stubborn Windows 10 Home Edition machines to Pro using a VLK

So, the machine auto-activates Windows 10 Home Edition at install time. No prompts, no nothing. You insert the generic upgrade key, and it throws an unhelpful error message back at you:

Error code 0x83FA067 means exactly the same thing, as does 0xC004F069.

This probably surprises nobody, but trying again, as the above message suggests, will not work for you.

The solution? Stuff the install media (in my case a USB stick made with the Media Creation Tool) back into the machine, then run the following command:

setup.exe /auto upgrade /pkey VK7JG-NPHTM-C97JM-9MPGT-3V66T

There’s that generic upgrade key again.

After it’s done updating, proceed as usual.

Shrinking a Windows 10 installation for small harddrives

Cheap Windows 10 capable devices often come with very limited internal storage space. To make the most of it, it is of course crucial that Windows itself takes up as little space as possible. To combat the bloat, there’s a little-known feature in compact.exe, the built-in disk compression tool, that compresses the operating system itself, often saving several of those valuable gigabytes.

Microsoft does have some documentation on this feature, right here.

Basically, all you do is open cmd.exe as an administrator and run:

cd %windir%\system32
Compact.exe /CompactOS:always

You’ll see something like this:

To check whether or not CompactOS is enabled on your Windows installation, issue the following commands:

cd %windir%\system32
Compact.exe /CompactOS:query

Whoa! Look at all that space, mom!