After years of training journalists and NGOs in communication and operational security, and years of researching the tools and protocols they use, it took a few more years to develop a reasonable answer to most of the issues encountered along the way.

In today's world of commercially available government malware you don't want to store your encryption keys on your easily infected computer. You want them stored on something that you could even take into a sauna or a hot tub - maintaining continuous physical contact.

So people who care about such things use external smartcard-based crypto devices like Yubikey Neos or Nitrokeys (formerly Cryptosticks). The problems with these devices are that you have to enter PIN codes on a computer you shouldn't trust, that they are designed for centralized use in organizations, or that they are based mostly on PGP.

Acquiring, verifying, trusting and using the correct PGP keys from your peers is also a delicate operational security dance where many steps are easy to mess up. A proper device would be able to exchange keys directly with other similar devices, making the process easier with far fewer opportunities to err. Another shortcoming of PGP is its use of aging cryptographic primitives. An adequate device would deploy post-quantum algorithms with protocols that allow forward secrecy, peer anonymity, and other modern concepts missing from PGP.

A well-designed device must also come with a proper threat model. A threat model explains the defensive capabilities and the limits of any security device by making assumptions about the attacker, so that the user can understand how and against what the device protects, and what is based on assumptions versus what on proofs.

One of Snowden's revelations provided evidence for interdiction attacks: packages being rerouted so the hardware can be backdoored while it is shipping to the customer. An ideal device could be bought and assembled locally without leaving a window of opportunity for interdiction. For the most paranoid (or their trusted friends) it should be possible to buy all parts in a local store and assemble the device in a local workshop. Having all designs and software freely available also makes it easy to customize and extend such a device.

You also want a device that doesn't draw attention - something like a phone, a smartwatch or a USB stick.

You want a PITCHFORK.

I'm happy to introduce Project:PITCHFORK and to announce the public availability of all related sources.

Project:PITCHFORK is an attempt to produce tools to improve and research the operational security of individuals and groups.

The PITCHFORK is a small USB device - a cryptographic Swiss army knife. This is the original concept from 2013, framed as an NSA leak (which confused quite a few friends of mine back then):



Here's the official site:

https://pitchfork.ist - with the wiki and, most importantly, all the related git repos.

If you're into embedded or crypto development, the PITCHFORK is a serious device that contains a lot of fun. Cool people from the TU/Nijmegen regularly dump out nice crypto code optimized for the Cortex-M series. The PITCHFORK serves three goals: protecting our keys, and providing a platform for building and for breaking crypto on embedded platforms.

A bit of history

Development started in 2013 with the experimental PGP replacement PBP, by trying to run Curve25519 operations on a r0ket, and with the now quite popular libsodium wrapper pysodium. In late 2013 I got my development board, an Open207 from Waveshare, and had the first USB storage controller firmware and initial PITCHFORK firmware ready.

In May 2014 I started pyrsp, a tool that makes development easier by allowing Python scripts to control the CPU directly over the Serial Wire Debug (SWD) protocol, which is similar to JTAG but uses fewer wires. A talk about pyrsp became a hit, even on hackaday. A bit later I figured out how easy it is to look for PGP encrypted messages; a variant of that even made it into the file(1) magic signatures. In parallel to pyrsp I started to design the board; the inspirations were the original r0ket and the bitcoin trezor project.

The first boards arrived in early 2015, but work was suspended until early summer, when the first bugs were identified - and more during our camp++, where I also gave a talk on the progress (or lack thereof).

Work was then suspended until the beginning of 2016, when a 2nd batch of boards was ordered with all bugs fixed - and a few new ones introduced. Lots of work was done on the HW and the firmware in the first half of 2016, and a Nokia 3310 version was designed and ordered. At the camp++ in 2016 I gave another talk. We also started to work on the Reflowmaster2000plus Deluxe Pro - a reflow oven - so that you can indeed bake your own PITCHFORKs at home in your toaster. A first closed beta was run with 15 PITCHFORKs given to contributors.

I'm currently looking for a good manufacturer - one which also does design and produces rugged/waterproof/shielded cases. Once I find one, there'll be a crowdfunding campaign where you can acquire working PITCHFORKs as perks and sponsor future research and development of Project:PITCHFORK.

I must say it has been a truly exciting project so far: crypto, low-level HW stuff, assembly, on a platform that reminds me of the computing capacities of my early years. Lots of fun and learning. And lots of help from good friends - especially from the Hungarian Autonomous Center for Knowledge, the NLnet and the Renewable Freedom foundations; without their contributions this project would be stuck, and probably forgotten at the bottom of a todo list.

generating pgp ids


[image: a proper fingerprint, from Wikipedia]

The tool I release today is genkeyid, part of my gpk PGP key management suite. It helps you bruteforce arbitrary PGP key ids by modifying the timestamp field of public keys so that the packet hashes to a given key id.

I also release setfp.py, which allows you to set arbitrary timestamps in PGP RSA key pairs and recalculates the RSA signature accordingly. You might want to combine this with the previously released genkey tools.

The two steps are separated because the bruteforcing only needs a public key, while setfp also needs an unencrypted private key. So if you want to have a special key id but also maintain key management opsec, you should do the patching offline on a clean system that you discard later.

For the truly ignorant and for those having extra clean systems and lots of entropy available in bulk, there's genid.sh, which does the two steps in one, generating as many unencrypted keypairs as necessary until a suitable one is found.

Of course this is nothing new, there are existing examples of manipulated key ids. Some people have issues with the ambiguity of key ids, but one of the authors of PGP says this is ok. The PGP FAQ has more on this.

get it from github/stef/gpk

Or read more: README.genkeyid.org

Announcing pwd.sh


[image: post-its as password managers]

I wanted to switch to KeepassX to store all my passwords, but I wanted to use GPG to encrypt them. So I came up with pwd.sh. It's a simple shell script that you can bind to your window manager keybindings; when you invoke it, it uses the currently focused window to deduce a key under which to store the username and the password. For better browsers like Firefox, Chromium, luakit and uzbl this means the currently loaded URL, for all other windows the current window title. When creating a new password it is generated automatically; only the username is queried. I also wrote a small script that imports all passwords from Firefox into the new format. I'm very happy that all my passwords are now isolated from my browsers and also protected by my PGP key on my external cryptostick.
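The core idea can be sketched in a few lines of shell. This is my own sketch, not pwd.sh's actual code: the function names, the `~/.pwds` layout and the use of xdotool are assumptions.

```shell
# map a window title / URL to a storage filename; hashing keeps the
# visited sites out of plain directory listings
pw_file() {
  printf '%s' "$1" | sha256sum | cut -d' ' -f1
}

# store an auto-generated password for the currently focused window,
# encrypted to your PGP key (assumes xdotool, gpg and $MY_GPG_ID)
store_pw() {
  user=$1
  key=$(xdotool getactivewindow getwindowname)  # window title; in good browsers this is the URL
  pass=$(openssl rand -base64 15)               # generated, never typed
  printf '%s %s\n' "$user" "$pass" \
    | gpg --encrypt --recipient "$MY_GPG_ID" \
    > ~/.pwds/"$(pw_file "$key")".gpg
}
```

Decryption is the reverse: derive the filename from the focused window again and pipe the file through `gpg --decrypt`.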

When I showed this yesterday in our hackerspace, 2 members immediately installed and started massively improving pwd.sh, thanks asciimoo + potato!

So if you're running Linux, like stuff based on the KISS principle, and are a crypto/gpg fetishist, you might want to consider trying out this new "keepassx niche-killer" ;)

Check it out: pwd.sh



[image: certificate in Firefox]

I just released tlsauth, a lightweight implementation of a CA and supporting scripts and config snippets that should make TLS client-certificate-based authentication a bit easier to set up. The current implementation works with nginx (if someone knows how to do this in Apache, please contribute).

I also provide Flask-tlsauth and Django-tlsauth bindings, also available on pypi. Both contain simple web-based Certificate Authority functions, like submitting CSRs, listing and signing them, and even something similar to regular user registration - with the only difference that when you finish registering you have to import the certificate.

So when you look at this from a traditional PKI perspective, something is fishy. User registration, and I get a cert back? Wait a minute, shouldn't the CSR be submitted by the user in the first place? Yes. But. :) Consider this from a traditional user registration workflow: the user usually trusts the server with his secret, the password. With TLSAuth, however, the server drops the secret after creating it and sending it to the user. With most users blindly trusting their service providers, I assume they'll also trust them to diligently drop the secret. The certs are not good for much else than logging in to the server. And the CA can produce as many certs as it wants anyway.
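The server-side flow can be sketched with plain openssl. This is a minimal sketch, not what tlsauth actually runs; all filenames and subjects are examples.

```shell
cd "$(mktemp -d)"  # work in a scratch directory

# 1. one-time: create the CA key and self-signed CA certificate
openssl req -new -x509 -days 3650 -nodes -subj "/CN=Example CA" \
    -keyout ca.key -out ca.crt

# 2. on registration: generate the user's key and CSR server-side
openssl req -new -nodes -subj "/CN=joe@example.com" \
    -keyout joe.key -out joe.csr

# 3. sign the CSR with the CA
openssl x509 -req -days 365 -in joe.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out joe.crt

# 4. bundle key and cert as PKCS#12 for browser import - after sending
#    this to the user, the server deletes joe.key ("drops the secret")
openssl pkcs12 -export -passout pass:import-me \
    -in joe.crt -inkey joe.key -out joe.p12
```

In the normal CSR-first workflow steps 2 and 4 would happen on the user's machine instead; the convenience trade-off described above is exactly that they happen on the server.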

Why is this good?

No more passwords

Your users win, because now they only need a password for importing the key into their browser, where it is then protected by the browser master key. This also prevents users from reusing the same passwords on unrelated sites.

You can also copy your key around and load it into different devices if you want to access the services from them as well; this only needs to be done once per browser.

This also means automatic authentication on all services sharing the issuing CA with the client's issuer: you can log in to all services on the various servers certified by your issuing CA.

With appropriate security tokens you can even store your keys on smartcards and keep them safe even from your browser.

No more user databases!

Server operators win because they do not need to store a user database! This removes all kinds of privacy issues and considerably reduces the costs of database leaks.

Your users always send their TLS cert, which is signed by the CA - you. So when someone comes and says "hey, I'm Joe, here's a certificate about that from you", you can be sure about it. ;) A cert can also contain more information, like an email address, or even a real-life address for shipping, etc. You decide what you require the certs to contain when you sign your users' certificates.

Authentication on TLS level

You know your client before it even says "GET / HTTP/1.1". This means you can redirect your handler accordingly: static-only content for unauthenticated visitors, full dynamic server-side scripting and security bugs for trusted peers, and maybe even IMAP or SSH for certain certificates. ;)
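In nginx this routing could look roughly like the following snippet - a sketch, not tlsauth's shipped config; paths and the backend address are placeholders. `ssl_verify_client optional` lets both authenticated and anonymous clients reach the server, so the handler can branch on the result:

```nginx
server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/server.crt;
    ssl_certificate_key    /etc/nginx/server.key;

    # the CA whose client certs we accept
    ssl_client_certificate /etc/nginx/ca.crt;
    ssl_verify_client      optional;

    location / {
        # unauthenticated visitors get static content only
        if ($ssl_client_verify != SUCCESS) {
            rewrite ^ /static$uri last;
        }
        # trusted peers get the full dynamic application
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-SSL-Client-DN $ssl_client_s_dn;
    }
}
```

The `X-SSL-Client-DN` header is how the Flask/Django bindings can learn who the authenticated user is without any password check.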

Why is this bad?

Bad Browser UIs

On the user side, logging out is currently kinda impossible. But there seems to be a key-manager for stock Firefox - Iceweasel is not supported :/ - that could be helpful with logout and other key management related tasks.

It would be nice if the vendors put more effort into improving the related user interfaces instead of slacking or reinventing existing protocols.

Losing your phone/tablet/laptop

Losing HW is always a bad thing, especially when you have your certificates on it. Hopefully they are protected by a master password in the browser, and by full disk encryption on the hard drive - but this should be standard anyway.

Deleting users

CRLs or OCSP (OCSP stapling is already supported in nginx) are the normal way to do this. The question is how to keep track of the serial numbers without exposing the privacy of the end users by keeping a server-side database.

Protecting your own CA root key

This is something that kinda makes the operator the weakest link in the whole setup. If anyone has access to your CA signing key, they can MITM any connection of any browser that trusts this CA. So you should apply the utmost key management security, with air gapping, and possibly use some kind of cheap HSM like a smartcard - or something even better.

Loose ends

I understand that TLSAuth does not solve all problems. But for small groups or projects TLSAuth might make a lot of sense. It's perfect for protecting a phpmyadmin from all the probes on the internet while still making it available to the admins, or you can run your own webmail for all your family and stop worrying about the web as an attack vector.

There's a few open questions and loose ends to be explored here. But I'm quite hopeful to use TLSAuth in future projects, maybe even Parltrack.

Possible Parltrack features


I've been maintaining a list of possible features for Parltrack if the funding campaign hits 10.000 EUR. I'd be interested to hear feedback and other suggestions on this list:

Monitor by subjects

Parltrack already provides listings by subject (e.g. Protection of privacy and data protection), but there is currently no way to subscribe to changes or new dossiers in these listings. Also missing is a user interface where users can browse and select all existing subjects. This feature would allow broad tracking of policy areas instead of the currently supported dossier-by-dossier tracking.

Monitor by search phrase

Simply enter a search phrase and your email and get notified if any dossier appears or changes that contains this phrase in its title.

Subscription management

A user-interface to better manage your subscriptions to things you're monitoring.

Visitor Trends

Display any trending dossiers or MEPs based on the visitor access statistics. This way you can identify what or who is currently hot in the EP.

Amendments from the 6th term

Adding the amendments from the 6th parliamentary term (2004-2009) as well; the different formats require tuning the scrapers to handle these earlier documents.

Historical view

The preservation of historical data makes it possible to also present snapshots from previous points in time. A nice timeline visualization is also imaginable.

Localized Parltrack data

Parltrack currently only scrapes in English; some information is easily scrapable in the other 22 European languages as well. Some might be harder, but for NGOs it would definitely make a difference to have this information in their native language - especially if we're talking about re-users of the liberated datasets.

Commenting on dossiers and MEPs

Last but not least, a feature that I have long been contemplating. It would be nice to somehow merge Pippi Longstrings, Herr Nilsson and Parltrack into a useful bundle, creating the possibility to comment on legislative proposals and their procedural meta-information in one location. The issue is that a public service like this needs a lot of moderation, and I fear that serious NGOs would not want to trust their internal political insights and commentary to an untrusted 3rd party like Parltrack. This feature is also the basis for the 750 EUR perk in the campaign, by the way ;)


So this would be an initial list of medium to big features to be added, in addition to the site redesign and the various small improvements that come up in the meantime, with possible other yet unplanned features to be added to this list. I expect this to occupy me for about a year, especially if we reach funding levels that allow me to add new data sources as well.

There is also continued cooperation with NGOs reusing the Parltrack database, like La Quadrature du Net's awesome Political Memory and the just recently started Lobbyplag initiative, which wants to expand its operations beyond the Data Protection dossiers.

If you agree with all or some of these goals, please consider supporting the current fundraising campaign by donating and making other people aware of this initiative. If you feel some important thing is missing let's talk about it, information and financial feedback are both important for the future of Parltrack, thank you.



[image: EP - ACTA vote]

About two years ago Parltrack started as another tool trying to get some information that was necessary at that time. Since then the amount and quality of data in Parltrack has come a long way. One year ago I had to rewrite all the scrapers as the European Parliament upgraded their website. A couple of related tools have been developed, for example Herr Nilsson or - the most widely known - Political Memory, or memopol as we call it. Also, ACTA has been defeated; I believe Parltrack contributed a small part to this success. Having recent and good data on the ground was essential for campaigning in and around the European Parliament.

I think Parltrack is a tool with lots of potential. I'd really like to find some more time to just data-mine Parltrack, which was one of my initial motivations when I started this project. As a good friend used to say: most of our work in the commons is financed by pre-accumulated wealth from the traditional system. The peculiar nature of this open data combined with free software makes it somewhat difficult to keep this project sustainable. I've tried Flattr, debated and rejected advertising, offered consulting/custom development jobs, and it turns out I'm too small to be eligible for EU funding grants. Depleting resources lately shifted my attention to other jobs; however, Parltrack seems to be used quite a lot, and the lack of maintenance has already started showing. To stop this degradation and to allow me to focus more on Parltrack in the coming year I started an Indiegogo campaign. If you care about freedom, datalove, kittens, puppies, or just me, go here and support this campaign. It will allow me to build more free infrastructure.

thanks, s

Thank you to all my friends who helped me set up this campaign.

ps: for Parltrack related news you can follow @Parltrack, and RSS updates

PGP key generation


With PGP used in everyday life, our communication is mostly state of the art and quite expensive to compromise. The weakest links nowadays are the systems where the communication terminates and is decrypted to plaintext. Not only are the messages available unencrypted there, but also the encryption keys. Proper key management becomes essential; however, diligent key management is something that not even the German Wehrmacht was always able to do properly. :) To reduce the probability of errors, there's a script at the end that automates most steps.

One essential aspect of key life-cycle management is key generation.

Note: most of the procedure below can be substituted by using an OpenPGP smartcard, which can generate keys that cannot be extracted easily; all signing and decryption happens in the smartcard itself. Such smartcards however usually have certain storage limits: current technology usually allows 3 keys of 3072 bits, some newer models also 4096.

Generating a new key

Needed things:

  • a secure offline environment for key generation,
  • a secure offline location to store the signing key,
  • another secure offline location to store a backup of the signing key,
  • a third secure offline location to store a revocation certificate,
  • a pristine offline system for generation and handling of the key,
  • 3 distinct and strong passphrases

Offline system for key management

The biggest threat to key generation is a trojan/malware-compromised system that not only leaks the keys but also captures the keystrokes of the passphrases. To counter this threat it is strongly advised to use a pristine live CD to boot into an offline environment (yes, disconnecting the network cable is a good idea anyway). I like Tails for such a live system, but Privatix or Liberté Linux might be similarly useful.

By default, most PGP keys consist of a signing and an encryption key. The web of trust is woven by signing other people's signing keys. However, there's a trade-off: either the key has a limited lifetime, and we have to ask our peers to re-sign the new key from time to time, or we create a signing key with unlimited lifetime, but then handling becomes difficult as we want to protect this key with increased diligence. Multiple sources online suggest creating a master signing key, which is only used (again offline - see a pattern here?) for signing other keys, plus at least two subkeys: one for signing anything else and one for encryption. This allows you to regularly replace your sub-keys without the need to have them re-signed by your peers; only your master key needs to sign the new sub-keys. Regenerating your sub-keys regularly also mitigates PGP's lack of perfect forward secrecy.

All three keys should be protected by strong passphrases. As signing and encryption do not necessarily happen with the same frequency and in the same applications, it makes sense to set different passphrases for the two - that means 3 passphrases in total, which sounds hard to remember. Instead of using a long random password, rather use a passphrase consisting of at least 5 words. Now instead of 14 random letters you only have to remember five words; that should be manageable - especially if you make up a small story out of the words. One way to generate such passphrases is the Diceware method; another is this simple script, which replaces the dice with openssl's rand and accepts a word list of any size that fits in memory - seed it with any word list that you trust.
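The dice-replacement idea could look something like this. A sketch, not the linked script: the function name and wordlist path are mine, and the modulo step has a slight bias that a careful implementation would remove by rejection sampling.

```shell
# pick $2 random words from wordlist $1, using openssl rand as the
# entropy source instead of dice
randwords() {
  wordlist=$1; count=$2
  n=$(wc -l < "$wordlist")
  i=0
  while [ "$i" -lt "$count" ]; do
    # 4 random bytes reduced modulo the list size (slightly biased)
    r=$(( 0x$(openssl rand -hex 4) % n + 1 ))
    sed -n "${r}p" "$wordlist"
    i=$((i + 1))
  done | tr '\n' ' '
  echo
}

# e.g.: randwords /usr/share/dict/words 5
```

With a 7776-word Diceware-style list, five words give about 64 bits of entropy - comparable to a long random password, but much easier to memorize.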

When you generate a key, it is good practice to also generate a revocation certificate in case your key gets compromised. For cases where you might no longer have the key available to generate such a certificate, having one ready up-front can prove useful. For cases where you lose or destroy the encrypted container holding the private keys, it is also useful to have a backup ready - after all, it took a lot of entropy to generate the key, don't waste it ;).

Both the revocation certificate and the backup are hopefully very rarely needed, but should be well protected. You can choose to use passphrases and encrypted containers, or you can encrypt the cert/backup with a 128-byte cryptographically strong random key and use Shamir's Secret Sharing Scheme to split the encryption key into multiple parts, distributing them geographically and perhaps to trusted persons. Only when a preset number of shares is presented can the backup/cert be accessed. You could also generate a third set of shares for your backup, in case something happens to you and you want your family, friends or lawyers to be able to read encrypted data belonging to a certain private key...
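With the ssss package this could look roughly like the following commands - an illustrative sketch, not tested against every ssss version (prompting details differ), and revoke.asc is an assumed filename:

```shell
# a strong random key, used only to encrypt the revocation cert/backup
key=$(openssl rand -hex 16)
openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$key" \
    -in revoke.asc -out revoke.asc.enc

# split the key into 5 shares, any 3 of which recover it
printf '%s\n' "$key" | ssss-split -t 3 -n 5 -x -q

# later: run  ssss-combine -t 3 -x  and paste any 3 shares
# to reconstruct the key
```

Each share on its own reveals nothing about the key, so individual shares can be handed to people you only partially trust.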

As you should very rarely need to generate such a full key and it is quite a complex procedure, there's a script that tries to automate all the steps above. It depends on gpg, gpgsplit, srm, openssl and ssss, of which I think ssss might need to be installed manually on Tails. The script generates all interim material in /run/shm so that no trace is left on storage media; you have to move the various pieces to their final locations yourself: importing the subkeys into your keyring, distributing the shares for the backup and the revocation cert, and storing the master signing key and its backup copy. I will try to cover the storage of keys on dedicated USB sticks in a later post. I hope you enjoy your new pimped keys (oh, and by the way, nothing prevents you from having more than two subkeys).
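The resulting layout - a master signing key plus two everyday sub-keys - can be approximated manually with GnuPG's batch mode. This is a hedged sketch with example names and sizes, not the script's exact parameters (the script additionally handles srm, ssss shares etc.):

```shell
export GNUPGHOME=$(mktemp -d)    # throw-away keyring for the example
cat > params <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 3072
Key-Usage: sign
Subkey-Type: RSA
Subkey-Length: 3072
Subkey-Usage: encrypt
Name-Real: Jane Example
Name-Email: jane@example.com
Expire-Date: 1y
%commit
EOF
gpg --batch --gen-key params    # master signing key + encryption subkey

# add the second, everyday signing subkey; the master key itself then
# only ever signs other keys and new subkeys - offline
fpr=$(gpg --list-keys --with-colons | awk -F: '/^fpr/{print $10; exit}')
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$fpr" rsa3072 sign 1y

# GnuPG >= 2.1 also drops a ready-made revocation certificate
# into $GNUPGHOME/openpgp-revocs.d/
```

(The sketch uses `%no-protection` so it runs unattended; in real use you would of course set the strong passphrases discussed above.)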

Comments and improvements are welcome.

amendments in parltrack


Here's a sneak preview of an upcoming parltrack feature:


The data is possibly not complete, but gives good additional information. If everything goes well this will be integrated into the MEP and dossier views. Until then you can change the dossier id in the url above, or replace it with the name of an MEP:


Some stats on the data,

  • total number of amendments in the 7th term so far: 168917,
  • amended dossiers: 976,
  • amending MEPs: 775.

top 3 MEPs:

  1. Olle SCHMIDT: 2038
  2. Philippe LAMBERTS: 1974
  3. Silvia-Adriana ŢICĂU: 1610

top 3 amended dossiers:

  1. 3075: Structural instruments: common provisions for ERDF, ESF, Cohesion Fund, EAFRD and EMFF; general provisions applicable to ERDF, ESF and Cohesion Fund (2011/0276(COD))
  2. 2482: Common Fisheries Policy (2011/0195(COD))
  3. 2310: Public procurement (2011/0438(COD))

If anyone wants to play with the raw data:


And to see what data might be missing:


Tunnel daemons


[image: molerat]

This post evaluates various methods of tunneling ssh connections to pierce through restrictive firewalls. The following setups are evaluated:

  • HTTPTunnel+stunnel4: moderately difficult to set up, but once installed it appears as legitimate HTTPS traffic.
  • Iodine: the setup needs the most effort, but once done, and if the network allows DNS queries, it works quite reliably.
  • CurveCP: setup is quite easy, and when done the link is encrypted and fast. However, firewalls that allow UDP/53 to pass are somewhat rare.
  • ICMPTX: setup is quite easy, however there is no encryption - use it only to tunnel encrypted traffic like ssh and such.
  • Tor: setup is easy, usage is a bit delayed due to the latency of the Tor network, and requests look like normal HTTPS traffic.

The setup with the most effort also seems to be the most reliable: an iodine-based link over DNS can break out of a lot of networks. If we can use HTTP to browse but other services are restricted, then the httptunnel is adequate. For less setup hassle but increased latency, Tor tunnels also deliver reliably. The usefulness of ICMP and CurveCP tunnels depends on the firewall configuration, but if they work, they're pretty fast.


Over HTTPS

This method is generally useful in heavily restricted networks, where you can only use the web for browsing, but no other services are allowed.

We use the fine tool httptunnel for masking our ssh connection. However, httptunnel traffic is not encrypted, and thus the ssh handshake can be identified in it. To avoid that, we put a tunnel into our tunnel using stunnel.

On the server

First we need to generate the SSL certificate:

openssl req -new -x509 -days 365 -nodes \
       -out htcert.pem -keyout htcert.pem

Set up an stunnel, make sure to set the ip address, and the user and group exist:

/usr/bin/stunnel -f \
     -r localhost:<httptunnel-port> \
     -d <public ip address>:443 \
     -p htcert.pem -s stunnel4 \
     -g stunnel4 -P ''

When this is done, we can run httptunnel to connect the sshd with the stunnel:

/usr/bin/hts -w -F localhost:22 <httptunnel-port>

On the client

Get the generated certificate from the server (don't forget to remove the private key part). You need to rename the cert to its hash value and append a '.0':

mv htcert.pem $(openssl x509 -noout -hash -in htcert.pem).0

Now start the stunnel:

sudo stunnel -c \
       -d localhost:<stunnel-port> \
       -r <server-address>:443 \
       -s stunnel4 -g stunnel4 -P '' -a . -v 3

We need to set the server address (can be IP or name-based), make sure the user, group exist.

start the httptunnel:

htc -F <httptunnel-port> localhost:<stunnel-port>

The tunnel will listen on <httptunnel-port>. Enjoy your ssh-over-https:

ssh -p <httptunnel-port> localhost

Over DNS

In some cases internet access is blocked but DNS traffic is allowed to pass, allowing us to tunnel through DNS.

If you can set up a special DNS entry for this, tunneling through DNS is very easy using the excellent iodine tool. Follow the straight-forward installation instructions.
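For reference, the invocations look roughly like this - t.example.com is a placeholder for a subdomain whose NS record points at your server, and the password and tunnel network are examples:

```shell
# on the server: answer DNS queries for t.example.com and create tun0
# on the tunnel network 192.168.99.0/24 (needs root)
iodined -f -P secretpassword 192.168.99.1 t.example.com

# on the client: brings up tun0 as 192.168.99.2 (or .3, ...)
iodine -f -P secretpassword t.example.com

# then simply:
ssh 192.168.99.1
```

The client sends data as DNS queries for subdomains of t.example.com and receives the replies as DNS answers, which is why it survives networks that force their own resolver on you.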

Use this method if the network allows resolving of names; even if a local DNS server is forced on us, the tunnel will still work thanks to recursive queries hitting your "authoritative server".

Hint: you can manage and delegate a DNS zone for free on afraid.org, if you don't have your own.

Using CurveCP on UDP/53

The drawbacks of using a DNS-protocol-based tunnel like iodine are that the tunnel has a huge size-wise protocol overhead, that you need to set up a slightly uncommon DNS configuration, and that domain names you control are usually registered under your real name. If the firewall does not force the usage of a local DNS server and allows traffic to UDP/53, then a CurveCP tunnel is the preferred option.

Alternatively you could also run on UDP/80 or other allowed UDP ports.

On the server

Note: During testing I had to recompile CurveCP as the address family was missing from the bind call, see the patch at the end of this post.

Create a server key:

curvecpmakekey serverkey

convert the key to hex, and store it on the client in serverkey.hex:

curvecpprintkey serverkey > serverkey.hex

run the curvecpserver:

curvecpserver <your host name> \
                serverkey \
                <your ip address> \
                53 \
                00000000000000000000000000000000 \
                curvecpmessage /usr/sbin/sshd -i

On the client

This depends on socat, the excellent Swiss army knife of socket handling.

Store the serverkey.hex that you generated on the server and run the client:

curvecpclient <curvecpserver hostname> \
    $(cat serverkey.hex) \
    <curvecpserver ip address> \
    53 \
    00000000000000000000000000000000 \
    curvecpmessage \
    -C sh -c "/usr/bin/socat tcp4-listen:9999,bind=127.0.0.1,reuseaddr,fork - <&6 >&7"

Start your ssh-over-curvecp:

ssh -p 9999 localhost


Over ICMP

Using ICMPTX you can set up tun devices that tunnel over ICMP, which is quite handy as in some cases it's not filtered and allows you to pierce through the blockades. ICMPTX creates a local network device, so tunneling anything is quite easy after setup.

On the server

Simply run

(sleep 1; ifconfig tun0 <server tunnel ip> netmask <netmask>) & icmptx -s <server ip address>

On the client

Simply run

(sleep 1; ifconfig tun0 <client tunnel ip> netmask <netmask>) & icmptx -c <server ip address>

sshing to your box is then a simple:

ssh <server tunnel ip>

Using Tor

Tor is great for hiding traffic; its latency is a bit higher than usual, but it's quite possible to get work done through Tor tunnels, even with ssh. If you configure your client-side Tor proxy to use a Tor bridge that runs on port 443, the tunnel looks like casual HTTPS traffic.

There are two options: you can connect from a Tor exit node to your normal ssh server - in this case skip the "On the server" part and use your normal hostname instead of the .onion address referenced there - or you can run sshd as a Tor hidden service.

On the server

If you want to run your ssh tunnel as a Tor hidden service, you simply have to add the following two lines

HiddenServiceDir /var/lib/tor/sshtun/
HiddenServicePort 22

to your /etc/tor/torrc, and find out the hostname of your new hidden service with:

cat /var/lib/tor/sshtun/hostname

On the client

You simply need to call the torified ssh:

torify ssh <.onion hostname from server>


Stubs for running the server-side daemons using the excellent runit tool can be found on github. These can be most easily installed using deamonize.sh. For client-side setup use the instructions in this post.

curvecp patch

curvecpserver had to be patched, as the address family in the bind call was uninitialized; the patch is below:

diff -urw nacl-20110221/curvecp/socket_bind.c nacl-20110221-new/curvecp/socket_bind.c
--- nacl-20110221/curvecp/socket_bind.c 2011-02-21 02:49:34.000000000 +0100
+++ nacl-20110221-new/curvecp/socket_bind.c     2012-08-19 02:52:25.000000000 +0200
@@ -9,6 +9,7 @@
   struct sockaddr_in sa;
   byte_zero(&sa,sizeof sa);
+  sa.sin_family = AF_INET;
   return bind(fd,(struct sockaddr *) &sa,sizeof sa);

pippi matures


[image: new filtering interface for pippi]

I wanted to pippi CETA against ACTA, some other FTAs (Korea, Cariforum) and some other docs, but found it difficult to do so. So during the last days I revamped pippi a bit.

The result is a new browsing interface, where you can directly start pippifications of documents. Clicking the "Pippi ★" button there takes the currently selected document and compares it with all shown starred documents. You can quickly filter all documents based on their title (this search uses powerful regular expressions), or filter on your own documents (more on that later) or on starred documents. The latter is useful for running a pippi against a greater selection of reference documents.

Another new feature: you are encouraged to be logged in when creating documents. This allows you to later edit the title of a document and to delete it as long as it has not yet been pippied against other documents. Being the creator of a document also lets you access it more easily, by filtering on your own collection when browsing documents.

So pippification of CETA against all those other documents was easy:

  1. I created all the documents (e.g. I copy/pasted ACTA from Oct 2011 from http://www.euwiki.org/ACTA/Tokyo_oct2, and used a bunch of CELEX ids for documents available on eur-lex),
  2. I went to "browse" and filtered on my own collection,
  3. I starred all the relevant documents for pippification,
  4. I selected CETA from this list, so that it is displayed in the Details,
  5. I hit "Pippi ★" and after some delay got the pippied results presented.

The result looks like this: http://pippi.euwiki.org/doc/ceta_ipr_2012feb

Hint: if you enter ACTA in "Filter by tag" in the top bar, then it hides the copies from the other documents...

Announcing Herr Nilsson


Herr Nilsson is a bot which fetches data from parltrack and imports it into a mediawiki. This helps to improve the stubs on euwiki itself.

This also allows other organizations to run their own internal wiki containing private analysis and commentary, a much requested feature. All you need is a mediawiki; Herr Nilsson sets up stubs for the dossiers of interest.

On parltrack there's now a Preferences menu in the top blue bar where you can set the address of your mediawiki; parltrack will then automatically display a Notes link in the top blue bar, linking to the dossier's page on your own hosted wiki.

pippi intl


Good news everyone, I just enabled hu, da, de, es, fi, fr, it, nl, pt, ru, se language support in pippi.



anonshort

Clicktracking is evil. dnet and endre specified the details of an anonymous URL unshortening service (UUS): anonshort. Basically it resolves HTTP and HTML meta redirects, and cleans out those annoying Urchin Tracking Module (UTM) URL parameters.

We currently don't provide a web user interface, only a very slim web API. Simply construct a URL by appending the shortened URL to http://anonshort.hsbp.org:8080/?u= and you get the resolved URL back. The easiest way is on the command line with curl:

curl 'http://anonshort.hsbp.org:8080/?u=<URL>'

The same service is also available, for even more privacy, as a tor hidden service at:

http://ixzr427vwpmxk3io.onion/

Using this tor hidden service is similarly easy with curl and torify:

torify curl 'http://ixzr427vwpmxk3io.onion/?u=<URL>'

We do cache the results, but in a way that prevents even us from deducing the input and output URLs without knowing the input URL. The algorithm is quite nifty; I hope it stands up to scrutiny (check out cache.py).
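To illustrate the property, here is a sketch of one way such a cache can work, not necessarily the algorithm in cache.py: index the cache by a hash of the input URL and encrypt the resolved URL with a key derived from the input URL, so the cache contents reveal nothing without the input URL. A shell sketch with sha256sum and openssl (1.1.1+ for -pbkdf2); both URLs are made up:

```shell
url='http://bit.ly/example'                 # hypothetical input URL
resolved='http://example.com/long/path'     # hypothetical resolved URL
# Cache key: hash of the input URL - on its own it reveals nothing.
key=$(printf '%s' "$url" | sha256sum | cut -d' ' -f1)
# Cache value: resolved URL encrypted with a key derived from the input URL.
enc=$(printf '%s' "$resolved" | openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$url" -base64 -A)
# Only someone who already knows the input URL can look up and decrypt:
dec=$(printf '%s' "$enc" | openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$url" -base64 -A)
```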

All resolving runs over tor, and the user agents are chosen from a pre-selected set of samples taken in the wild.

There was talk of a web-interface as well, let's see how that evolves.

enjoy it!



I'm experimenting with using omnom also as a platform to announce updates to itself; check out the omnom-announcement tag or point your RSS reader at the atom feed.

The latest message in a nutshell: anyone who uses the userscript for bookmarking should update.

widget for omnom


Good news everyone! Omnom - my feeble attempt at creating a proper^Wlibre delicious replacement - has now gained "widget" functionality. I took the original delicious widget and shamelessly adapted it. You can see the result in the right bar under "/dev/read".

If you are one of the lucky omnom users, you can use the code below, just change the 2 links pointing to my collection to your own.

<h3><a href='http://links.ctrlc.hu/u/stf'>/dev/read</a></h3><div id="omnom-box" style="margin:0;padding:0;border:none;"> </div>
<script type="text/javascript" src="http://links.ctrlc.hu/u/stf/?format=json&j"></script>
<script type="text/javascript">
   var ul = document.createElement('ul');
   ul.setAttribute('id', 'omnom-list');
   // omnom_posts is provided by the JSON feed loaded above; besides 'url',
   // a 'title' field is assumed here for the link text.
   for (var i=0, post; post = omnom_posts[i]; i++) {
      var li = document.createElement('li');
      var a = document.createElement('a');
      a.setAttribute('href', post.url);
      a.appendChild(document.createTextNode(post.title));
      li.appendChild(a);
      ul.appendChild(li);
   }
   document.getElementById('omnom-box').appendChild(ul);
</script>


Proudly powered by Utterson