After years of training journalists and NGOs in communication and operational security, and after years of researching the tools and protocols in use, it took a few more years to develop a reasonable answer to most of the issues encountered along the way.

In today's world of commercially available government malware you don't want to store your encryption keys on your easily infected computer. You want them stored on something you could even take into a sauna or a hot tub - maintaining continuous physical contact.

So people who care about such things use external smartcard-based crypto devices like Yubikey Neos or Nitrokeys (formerly Cryptosticks). The problems with these devices are that you have to enter PIN codes on a computer that you shouldn't trust, and that they are either designed for centralized use in organizations or based mostly on PGP.

Acquiring, verifying, trusting and using the correct PGP keys from your peers is also a delicate operational security dance in which many steps are easy to mess up. A proper device would be able to exchange keys directly with other similar devices, making the process easier and leaving far fewer opportunities to err. Another shortcoming of PGP is its use of aging cryptographic primitives. An adequate device would deploy post-quantum algorithms with protocols that allow forward secrecy, peer anonymity, and other modern concepts missing from PGP.

A well-designed device must also come with a proper threat model. A threat model explains the defensive capabilities and the limits of any security device by making assumptions about the attacker, so that the user can understand how and against what a device protects, what is based on assumptions and what on proofs.

One of Snowden's revelations provided evidence for interdiction attacks: packages are rerouted so the hardware can be backdoored while it is shipping to the customer. An ideal device could be bought and assembled locally without leaving a window of opportunity for interdiction. For the most paranoid (or their trusted friends) it should be possible to buy all parts in a local store and assemble such a device at a local workshop. Having all designs and software freely available makes it easy to customize and extend such a device.

You also want a device that doesn't draw attention: something like a phone, a smartwatch or a USB stick.

You want a PITCHFORK.

I'm happy to introduce Project:PITCHFORK and to announce the public availability of all related sources.

Project:PITCHFORK is an attempt to produce tools to improve and research operational security of individuals and groups.

The PITCHFORK is a small USB device that works as a cryptographic Swiss Army knife. This is the original concept from 2013, framed as an NSA leak (which confused quite a few friends of mine back then):



Here's the official site:

https://pitchfork.ist, with the wiki and, most importantly, all the related git repos.

If you're into embedded or crypto development the PITCHFORK is a serious device that contains a lot of fun. Cool people from the TU/Nijmegen regularly dump out nice crypto code optimized for the Cortex-M series. The PITCHFORK serves three goals: protecting our keys, and providing a platform for building and for breaking crypto on embedded platforms.

A bit of history

Development started in 2013, with the experimental PGP replacement PBP, by trying to run curve25519 operations on an r0ket, and with the now quite popular libsodium wrapper pysodium. In late 2013 I got my development board, an Open207 from Waveshare, and had the first USB storage controller firmware and initial PITCHFORK firmware ready.

In May 2014 I started pyrsp, a tool that makes development easier by allowing Python scripting to directly control the CPU over the Serial Wire Debug (SWD) protocol, which is similar to JTAG but uses fewer wires. A talk about pyrsp became a hit even on hackaday. A bit later I figured out how easy it is to look for PGP encrypted messages; a variant of that even made it into the file(1) magic signatures. In parallel to pyrsp I started to design the board, with the original r0ket and the bitcoin trezor project as inspirations.

The first boards arrived in early 2015, but work was suspended until early summer; then the first bugs were identified, and more during our camp++, where I also gave a talk on the progress (or lack thereof).

Work was suspended again until the beginning of 2016, when a 2nd batch of boards was ordered with all bugs fixed - and a few new ones. Lots of work was done on the HW and the firmware in the first half of 2016, and a Nokia 3310 version was designed and ordered. At the camp++ in 2016 I gave another talk. We also started to work on the Reflowmaster2000plus Deluxe Pro - a reflow oven - so that you can indeed bake your own PITCHFORKs at home in your toaster. A first closed beta was run with 15 PITCHFORKs given to contributors.

I'm currently looking for a good manufacturer - one that also does design and produces rugged/waterproof/shielded cases. When I finally find one there'll be a crowdfunding campaign where you can acquire working PITCHFORKs as perks and sponsor future research and development of Project:PITCHFORK.

I must say it has truly been an exciting project so far: crypto, low-level HW stuff, assembly, on a platform that reminds me of the computing capacities of my early years. Lots of fun and learning. And lots of help from good friends - especially from the Hungarian Autonomous Center for Knowledge, the NLnet and the Renewable Freedom foundations; without their contributions this project would be stuck, and probably forgotten at the bottom of a todo list.

on pgp



First and foremost I have to pay respect to PGP: it was an important weapon in the first cryptowar. It has helped many whistleblowers and dissidents. It is software with quite an interesting history - if all the cryptograms could tell... PGP is also deeply misunderstood; it is a highly successful political tool. It was essential in getting crypto out to the people. In my view PGP is not dead, it's just old and misunderstood and needs to be retired in honor.

However, the world has changed since the happy internet times of the '90s: from a passive adversary to many active ones - with cheap, commercially available malware sold as turn-key solutions, intrusive apps, NSLs, gag orders, etc.

Archive & Compromise

Today it is cheap for a random spy agency to archive all encrypted messages for later decryption - if necessary. A few years ago the Wikileaks spy files included FinFly ISP (PDF) - a proxy that infected binaries during download at the ISP. Since then the Hacking Team leak gave us an in-depth look at who is buying this kind of mass spy gear. While the Data Retention Directive has been struck down in the EU by the European Court of Justice, many countries still practice data retention, in addition to taps by domestic intelligence agencies which can easily filter out PGP messages. Deploying some malware on persons of interest to recover their secret keys and passwords is a cheap operation that can be executed with minimal training.


What the discussion about PGP's obsolescence lacks is something that is very much required in cryptographic discourse: an adversary model, a set of actions the adversary can perform. Those cryptographic adversary models, however, might be a bit too much mathematics for many end-users, so for them I came up with the quite populist 4C model; there are only four generic adversary classes:

  • Citizens
  • Criminals
  • Corporations
  • Country-level actors

Is PGP a reasonable tool to protect against other citizens? Probably yes, unless your kid or your wife's PI installs a remote access trojan (i.e. becomes an active adversary). Is it good against criminals? Probably, but only because it's not economical for criminals to extract value from your cryptograms. Does it protect against corporations? Probably, as long as they stay within the law and don't siphon up everything they find anyway (e.g. smartphone apps). Does it protect against country-level actors? Most probably not.

Unsuitable models

Consider your average investigative journalist or whistleblower, running Windows or a Mac that they haven't updated because their kid's favorite game would no longer run, or because they simply don't want Windows 10. An adversary archiving encrypted messages can read their mails using a simple active malware attack, copying the secret key and logging its password. Once these are captured, the malware can and should remove itself.

In "first" world countries like France where there's now a "state of emergency" or the UK with their snoopers charter or the dutch who just passed another dystopian dragnet surveillance bill, this directly affects climate activists as much as labor unions or journalists. The case is probably even worse in Turkey or any of the Eastern Bloc states. This makes forward secrecy a mandatory requirement, as this implies that the malware has to be constantly active and thus also enhances chances of detection and mitigation, and also requires much better trained personal to operate.

Suitable models

Here's a good example of using PGP: a doctor and his patients could use PGP to secure their communication; it seems most pharmaceutical companies are still shying away from getting patient data by hacking into doctors' offices (although I expect that to be quite easy, and I ignore here software developed by pharma for doctors' offices). The law seems like a reasonable and natural defense-in-depth in this case. However, the metadata showing that you sent a mail to a doctor might still be interesting to your insurance company, regardless of the contents of the mail.

Another good example is signatures; they have not aged as badly as encryption. It's mostly OK to use PGP to sign software packages, git commits, SSL certs and even contracts, but even in those cases it is worth taking extra care with your keys if you sign something that can be used as an attack vector, like software source code or a TLS certificate.


Even if the adversary is only passive, it can still learn a lot about you; there are actors who "kill based on metadata". The often heard defense that TLS encrypts SMTP anyway ignores a reality where self-signed certificates are very common and mail servers are configured tolerantly, to let other legitimate users with badly configured (e.g. earthlink), self-signed or MITMed boxes still send and receive mails.
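
You can see this for yourself by peeking at the certificate a mail server presents for STARTTLS; a quick sketch (mail.example.org is a placeholder host), where a self-signed cert shows up as identical issuer and subject:

# dump the issuer, subject and validity of a mail server's STARTTLS certificate
openssl s_client -starttls smtp -connect mail.example.org:25 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates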

Archiving - data-at-rest not in-motion

The argument that forward secrecy (FS) gets in the way of reading old emails neglects the possibility of storing the messages differently. And even without FS, reading old emails can fail; some examples of how PGP fails similarly without FS:

  • You lose the ability to read your mails when you lose your key.
  • If you practice manual FS with quickly rotating keys, you still cannot read your old emails.
  • HW tokens can break, and then again you have no access to the mails.

I firmly believe that archived mails should be re-encrypted with something more appropriate than the cipher the sender chose - and riseup seems to agree. A solution would be something like tahoe-lafs, or some double scheme with the backup key split into shares, and a possibility to decrypt one message without compromising the confidentiality of the other messages. Additionally all archived mails should also be privately indexable/searchable/retrievable. This might be an interesting research project if anyone is looking for something to chew on (add it to Tom Ritter's wishlist if you want).
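
To make the "double scheme" idea more concrete, here is a minimal sketch - not a proposal of the exact design - where every archived mail gets its own key, wrapped by a separate backup key, so a single message can be decrypted without touching the rest:

# hypothetical backup keypair; the private half would be split into shares and kept offline
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out backup.key
openssl rsa -in backup.key -pubout -out backup.pub
# fresh random key per archived mail, used to re-encrypt that one message
K=$(openssl rand -hex 32)
openssl enc -aes-256-cbc -salt -pass "pass:$K" -in mail.eml -out mail.eml.enc
# wrap the per-message key with the backup public key, then forget the plaintext key
printf '%s' "$K" | openssl rsautl -encrypt -pubin -inkey backup.pub -out mail.eml.key

Decrypting one mail then only needs its own wrapped key, so the other messages stay confidential.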

App ≠ Protocol

Another issue is that secure messaging is nowadays equated with PGP-encrypted email or with using Signal on a smartphone. Most importantly, Signal (the app) is not the same as the Signal Protocol. Just as I don't recommend using PGP in many cases, I also do not recommend using a centralized service that keeps your keys on a smartphone. However, I warmly recommend using the Signal Protocol whenever messaging is to be done. It could be a direct replacement for PGP; someone just has to code up the whole thing (time to flesh out signal-cli).

Other tools

Indeed there are other quite promising tools being developed. opmsg is the most advanced and mature of the ones I looked at, and I would recommend opmsg any day over basic gpg. Conceptually another interesting approach is codecrypt, which uses post-quantum algorithms for signing and encryption; pond introduces a whole bunch of new concepts for messaging; coniks addresses many aspects of the key management issues that PGP neglects; safeslinger is also an exciting protocol - if you ignore that it runs on a smartphone.

Further reading

The essential paper if you are into secure messaging, which sums up most of the issues mentioned in this post and introduces examples of how to fix these and many more, is "SoK: Secure Messaging" by N. Unger, S. Dechand, J. Bonneau, S. Fahl, H. Perl, I. Goldberg, and M. Smith. This paper gives a granular analysis of various aspects of messaging and shows which tools or protocols go beyond what PGP provides as a baseline. Listing all of those aspects is a bit too much, but to whet your appetite, here are the four main aspects and their categories:

  • key-exchange: security, usability, adoption
  • conversation: security, deniability, usability
  • transport: privacy, usability, adoption
  • group conversations

The tables in this post are taken from the paper.


PGP for encryption as in RFC 4880 should be retired; there are some sunk-cost biases to cope with, but we should all rejoice that the last 3-4 years have seen so much innovation in this field, that RFC 4880 is being rewritten with many of the above points in mind, and that hopefully there'll be more and better tools. After all, it's an arms race, not trench warfare.

About the Author

stef has done lots of research on PGP in the past: GitHub repos (1, 2, 3, 4, 5) and blog posts (1, 2, 3, 4) in the archives. stef advocated and used PGP himself for many years; he trained journalists, NGOs and activists - on his own and in cooperation with organizations like the Tactical Technology Collective. You could say he has extensive field and theoretical knowledge. But he has also been cautioning against PGP for a few years now; he likes to believe he was one of the inspirations behind the secushare/pgp post. He even went to the GnuPG developers conference and gave a talk about how OpenPGP as such needs upgrades in many respects.

ep elections 2014


(I have to take a short break from forging code to share my concerns regarding the important upcoming European elections:)

Recent developments regarding the security of the internet show a striking resemblance to Western societies' apathy towards the crumbling of basic democratic values. Looking a little closer, the seeds of the European Union were planted around the same time a bunch of Californian hippies were working for the military on the internet. The idealistic spirit of those times is a unique heritage: never before did we have a decentralized means of communication, and never before did we have such a diverse representation in policy-making as in the European Parliament. "United in diversity" - indeed. Let's avoid the sad corruption of the internet into a tool of oppression and keep the EP working in the idealistic spirit of its creators.


Besides legislating on the standard parameters of toothpaste stripes, there are a few very important policy domains that point beyond the usual 5-year horizon of the average elected EP representative. The European Parliament was fundamental in stopping ACTA just 2 years ago - a battle which started (thanks wikileaks) long before the current batch of members of the European Parliament (MEPs) took their seats. Stopping the attempt to install EU-wide censorship - disguised as a child porn filter - was also a success. We have a lot of hope in the recently revised data protection regulation, and just this month the network neutrality regulation proposal was saved by a broad coalition against the intent and interests represented by the lead rapporteur.


We lost the unitary patent battle last year - and with it a piece of the EU economy and competitiveness. We still have all kinds of data sharing agreements with the US. The network neutrality and data protection proposals by the EP will also probably go into a second round after the elections, but the Council will be smart enough to wait for the results before committing itself to the next step (which seems to involve the UK vetoing this in the name of censorship, hidden behind the ragged excuse of child porn). We lost the cybercrime issue as well; vendor liability was not even mentioned in the final proposal. We also lost the Radio Spectrum Policy Programme, an important initiative about the prospects of the radio frequencies freed up by switching to digital television. Instead of opening up parts of this liberated commons, it is auctioned away to telco companies. With good legislation we could have created a new industry providing local radio-based internet services. Instead we fed the quasi-monopolies.


Among the many outstanding issues, most important is that ACTA is back on steroids as the Transatlantic Free Trade Agreement (TAFTA), a classical FTA renamed to TTIP so it does not sound so scary. Another concerning agreement is the Trade in Services Agreement (TISA), which seems to come out of the same corner as TTIP. Similar future challenges are the conclusions of the Data Protection and Network Neutrality initiatives. Data retention has just been ruled invalid by the European Court of Justice; this topic will surely come back in the next term. The world is copying our laws; let's make sure they are copying the good stuff.

We live in exciting times; on the global level Europe has a lot of merit. However, the other global players are not interested in a strong Europe, so Euro-skepticism and national politics play into our global competitors' hands. The NSA scandal is a great example of this, as it shows weak, isolated inaction in the member states. The only serious effort has been the more than a dozen hearings on this issue in the Civil Liberties Committee of the EP.


As with many populist movements, the root causes of euro-skepticism are partly valid and quite interesting. The European institutions are overly bureaucratic, some useless or redundant (looking at the EP in Strasbourg, for example), non-transparent, undemocratic and quite corrupt. The euro-skeptics' answer to the broken system is quite wrong, though: the tool is great, we just need to take responsibility, fix it and learn to use it! We are not living in a small isolated town; Europe is a major player in a global competition. As such we must use our power in a concentrated way, fix the problems identified by the euro-skeptics and be a role model for the whole world with positive action like the rejection of ACTA or a strong Data Protection regulation.

I do, however, see a chance of becoming a skeptic myself. As with any technology, the EP itself is, I believe, neutral; what matters is who uses it and how. If we allow the EP to degenerate by staffing it with the corrupt political elite that fails us daily at home, then I see a reason for skepticism myself - but still not against the institution, only against its inhabitants and rules.


"United in diversity" - indeed. the European parliament has members from 28 countries, between 170-190 parties, even if there are large political blocks - or groups as they're called in Brussels-speak - in the EP. There's no sign of a suffocating and anti-democratic majority dominating the parliament, there's almost always some dissenting splinter-group. Of course in such a diverse crowd there are also all kinds of interests represented, mostly narrow interests. Some are fully legitimate such as the narrow interests of Mediterranean fishers for example are not concerns shared by a polish miner, or less legit meddling of foreign, non-european interests like the tobacco industry, or the US State department, Hollywood, Monsanto, or the pharma industry, you name it. Of course the bulk of the parliament is from dumb populist parties that have no values but lots of closely controlled voters. But for every topic you have some kind of small core group of representatives that is deeply engaged and informed about the issue. Some of these core MEPs can be considered the villains representing narrow industry or interests external to Europe.


Some representatives have a strong interest in strategically serving the diverse European society. Issues like copyright, patents, data protection and network neutrality have been heroically fought over by a handful of MEPs. These sound like quite technical matters, but they very much define our environment and our daily lives. One of the most heroic of all was Amelia Andersdotter, the young Pirate MEP from Sweden. Although she started only at half-time of her term - due to the blocking by the French - she took on responsibility as some kind of rapporteur for 17 dossiers with quite hard topics. She also authored more than 1000 amendments, putting her way ahead of most of her colleagues when it comes to hard work and representing European social interests. Other notable champions were

...and lots of others; see the following part:

Ranking of MEPs

The campaigns of the leading political groups are incredibly boring, promising populist visions of "Jobs, Growth and Security". Let's not get into the statistics and history game about their merits in this regard. Instead let's look at some facts on long-term strategic positions affecting our whole society. score-ep.org ranks all MEPs based on their voting behavior on Climate Change, Fracking, GM Crops, Arms Trade and LGBT Issues. The presentation of this data-set is beautiful. Much less visually - and overlapping in the Climate Change dataset - I have also prepared such a scoreboard.

Based on the input of four interest groups whose assessments of the MEPs were available to me, this is a ranking of all MEPs serving in the 7th (currently ending) term of the EP. The four data-sets I used came from:

  • La Quadrature du Net's Memopol, which covers various internet and digital rights related topics.
  • Lobbyplag, which created an assessment based on the amendments submitted in the civil liberties committee to the Data Protection Regulation.
  • CAN Europe, Sandbag and WWF Europe, which rate MEPs based on votes related to climate change (this overlaps with the score-ep.org data).
  • Philip Morris, which tried to influence the tobacco directive; some of its MEP assessments have leaked to the public and thus into this list ;)

The results: eastern countries and conservatives have the least respect for civil liberties, long-term public good or social benefit. On the good side, the official champion is Rui Tavares; he and his green fellows rank highest when it comes to representing the widest interests. Personally I was expecting someone else to come out on top: Amelia Andersdotter. Her problem: she was in the wrong committee - Industry instead of Civil Liberties - and only members of the latter got scored by Lobbyplag. Had the amendments of the Industry committee been rated as well, and not only those of Civil Liberties, she would have come out on top.

The top 10 MEPs

Total Score MEP Country Party
2.8888 Rui Tavares Portugal Bloco de Esquerda (Independente)
2.8809 Jean Lambert United Kingdom Green Party
2.7909 Mikael Gustafsson Sweden Vänsterpartiet
2.6472 Jan Philipp Albrecht Germany Bündnis 90/Die Grünen
2.6333 Pavel Poc Czech Republic Česká strana sociálně demokratická
2.6174 Tarja Cronberg Finland Vihreä liitto
2.6166 Cornelis De Jong Netherlands Socialistische Partij
2.6111 Marije Cornelissen Netherlands GroenLinks
2.6055 Bas Eickhout Netherlands GroenLinks
2.5681 Rebecca Taylor United Kingdom Liberal Democrats Party

The bottom of this list is mostly populated by (French) conservatives.

Ranking of countries according to the 4 criteria:

rank country avg total
1 Denmark 0.729 10.206
2 Sweden 0.723 15.912
3 Netherlands 0.536 15.566
4 Estonia 0.458 2.751
5 Ireland 0.398 5.980
6 Belgium 0.349 8.725
7 Austria 0.325 6.825
8 Finland 0.297 5.056
9 Portugal 0.246 5.920
10 Cyprus 0.206 1.651
11 Malta 0.196 1.767
12 Greece 0.138 3.738
13 Slovenia 0.115 1.042
14 Germany 0.106 11.155
15 United Kingdom 0.052 4.073
16 France -0.003 -0.339
17 Lithuania -0.025 -0.333
18 Latvia -0.035 -0.318
19 Romania -0.044 -1.660
20 Spain -0.068 -4.104
21 Croatia -0.072 -0.875
22 Italy -0.142 -11.390
23 Slovakia -0.149 -1.938
24 Luxembourg -0.174 -1.049
25 Czech Republic -0.180 -4.324
26 Bulgaria -0.346 -7.622
27 Hungary -0.370 -9.634
28 Poland -0.730 -39.423

You can download these datasets in a CSV format that you can load into your favorite spreadsheet editor: meps.csv, countries.csv, parties.csv.


So what I want to say is this: the EP is a powerful tool, there are a lot of important issues, and there are a few good people in the parliament who have been working hard; there are also a few corrupt people in the parliament with vast industry support. And then we have the majority of the parliament - about 90-95% - who are so busy with other issues that they have no clue. These masses follow either the champions or the villains. We must make sure that we have more champions and fewer villains, and that the remaining masses are aligned with the champions.

So please look at the rankings, go and vote, and express your skepticism of the people who brought us here, not of the institutions that have been abused. It matters. Thank you.

generating pgp ids


A proper fingerprint (image from Wikipedia)

The tool I release today is genkeyid, part of my gpk PGP key management suite; it helps you brute-force arbitrary PGP key ids by modifying the timestamp field of public keys so that the packet hashes to a given key id.

I also release setfp.py, which allows you to set arbitrary timestamps in PGP RSA key pairs and recalculates the RSA signature accordingly. You might want to combine this with the previously released genkey tools.

The two steps are separated because brute-forcing only needs a public key, while setfp also needs an unencrypted private key. So if you want a special key id but also want to maintain key management opsec, you should do the patching offline in a clean system that you discard later.
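
To see the field that genkeyid tweaks, you can dump the packets of a public key; changing this creation timestamp changes the hash of the key packet and therefore the fingerprint and the key id (pubkey.asc is a placeholder file name):

# show the creation timestamp of the public key packet
gpg --list-packets pubkey.asc | grep -m1 created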

For the truly ignorant, and for those with extra clean systems and lots of entropy available in bulk, there's genid.sh, which does the two steps in one, generating as many unencrypted keypairs as necessary until a suitable one is found.

Of course this is nothing new; there are existing examples of manipulated key ids. Some people have issues with the ambiguity of key ids, but one of the authors of PGP says this is OK. The PGP FAQ has more on this.

get it from github/stef/gpk

Or read more: README.genkeyid.org

Announcing pwd.sh


Post-its as password managers

I wanted to switch to KeepassX to store all my passwords, but I wanted to use GPG to encrypt them. So I came up with pwd.sh. It's a simple shell script that you can bind to your window manager keybindings; when you invoke it, it uses the currently focused window to deduce a key under which to store the username and the password. For better browsers like Firefox, Chromium, luakit and uzbl this means the currently loaded URLs; for all other windows it is the current window title. When creating a new entry, the password is generated automatically and only the username is queried. I also wrote a small script that imports all passwords from Firefox into the new format. I'm very happy that all my passwords are now isolated from my browsers and protected by my PGP key on my external cryptostick.
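
For the curious, the basic mechanism can be sketched in a few lines of shell. This is not pwd.sh itself, just an illustration of keying the store off the focused window (it assumes xdotool and gpg, and a hypothetical key id 0xDEADBEEF):

# derive a storage name from the currently focused window title
key=$(xdotool getactivewindow getwindowname | tr -c 'a-zA-Z0-9.' '_')
store=~/.pwdstore/"$key".gpg
mkdir -p ~/.pwdstore
if [ ! -e "$store" ]; then
  read -p "username: " user
  pass=$(openssl rand -base64 18)                    # the password is generated, never typed
  printf '%s %s\n' "$user" "$pass" | gpg -e -r 0xDEADBEEF -o "$store"
else
  gpg -d "$store"                                    # prints "user password" for this window
fi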

When I showed this yesterday in our hackerspace, 2 members immediately installed and started massively improving pwd.sh, thanks asciimoo + potato!

So if you're running Linux, like stuff based on the KISS principle, and are a crypto/gpg fetishist, you might want to consider trying out this new "keepassx niche-killer" ;)

Check it out: pwd.sh



Certificate in Firefox

I just released tlsauth, a lightweight implementation of a CA plus supporting scripts and config snippets that should make TLS client certificate-based authentication a bit easier to set up. The current implementation works with nginx (if someone knows how to do this in Apache, please contribute).

I also provide Flask-tlsauth and Django-tlsauth bindings, also available on pypi. Both contain simple web-based Certificate Authority functions, like submitting CSRs, listing and signing them, and even something similar to regular user registration - with the only difference that when you finish registering you have to import the certificate.

So when you look at this from a traditional PKI perspective, something is fishy. User registration, and I get a cert back? Wait a minute, shouldn't the CSR be submitted by the user in the first place? Yes. But. :) Consider this from a traditional user registration workflow: the user usually trusts the server with his secret, the password. With TLSAuth, however, the server drops the secret after creating it and sending it to the user. With most users blindly trusting their service providers, I assume they'll also trust them to diligently drop the key. The certs are not good for anything other than logging in to the server. And the CA can produce as many certs as it wants anyway.
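
Under the hood the CA part is just plain openssl; a rough sketch of the equivalent manual steps (file names and subjects are made up, tlsauth wraps this in its own scripts):

# one-off: create the CA key and a self-signed CA certificate
openssl req -new -x509 -days 3650 -nodes -subj "/CN=example CA" \
    -keyout ca.key -out ca.pem
# "registration": create a key and CSR for the user, then sign it with the CA
openssl req -new -nodes -subj "/CN=joe/emailAddress=joe@example.org" \
    -keyout joe.key -out joe.csr
openssl x509 -req -in joe.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -days 365 -out joe.pem
# bundle key and cert into a PKCS#12 file the user can import into the browser
openssl pkcs12 -export -in joe.pem -inkey joe.key -out joe.p12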

Why is this good?

No more passwords

Your users win, because now they only need a password for importing the key into their browser, where it is then protected by the browser master password. This also prevents users from reusing the same passwords on unrelated sites.

You can also copy your key around and load it on different devices if you want to be able to access the services from them as well, but this only needs to be done once in each browser.

It also means automatic authentication on all services whose issuing CA matches the client's issuer: you can log in to all services on various servers certified by your issuing CA.

With appropriate security tokens you can even store your keys on smartcards and keep them safe from your browser.

No more user databases!

Server operators win because they do not need to store a user database! This removes all kinds of privacy issues and considerably reduces the cost of database leaks.

Your users always send their TLS cert, which is signed by the CA - you. So when someone comes and says "hey, I'm Joe, here's a certificate from you proving it", you can be sure about it. ;) A cert can also contain more information, like an email address or even a real-life address for shipping, etc. You decide, when signing your users' certificates, what you require them to contain.

Authentication on TLS level

You know your client before it even says "GET / HTTP/1.1". This means you can route requests accordingly: static-only content for unauthenticated visitors, full dynamic server-side scripting and security bugs for trusted peers, and maybe even IMAP or SSH for certain certificates. ;)
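
In nginx this kind of routing can be expressed with the built-in client verification variables; a hedged snippet (paths and locations are made up) for the server block, with verification set to optional so unauthenticated visitors still get the static content:

ssl_client_certificate /etc/nginx/tlsauth/ca.pem;    # the CA you sign user certs with
ssl_verify_client optional;

location /app/ {
    if ($ssl_client_verify != SUCCESS) { return 403; }
    proxy_pass http://127.0.0.1:8000;                # dynamic backend only for trusted peers
}
location / {
    root /var/www/static;                            # static-only content for everyone else
}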

Why is this bad?

Bad Browser UIs

On the user side, logging out is kind of impossible at the moment. But there seems to be a key manager for stock Firefox - Iceweasel is not supported :/ - that could help with logging out and other key management related tasks.

It would be nice if the vendors would put more effort behind improving their related user interfaces instead of slacking or reinventing existing protocols.

Losing your phone/tablet/laptop

Losing hardware is always a bad thing, especially when your certificates are on it; hopefully they are protected by a master password in the browser and by full disk encryption on the hard drive. But that should be standard anyway.

Deleting users

A CRL or OCSP (OCSP stapling is already supported in nginx) is the normal way to do this. The question is how to keep track of the serial numbers without exposing the privacy of the end users by keeping a server-side database.
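
If the certs are issued through the openssl ca machinery (which keeps an index of serials), revocation boils down to two commands; a sketch assuming a standard ca.cnf - tlsauth itself may organize this differently:

# revoke one user certificate and regenerate the CRL
openssl ca -config ca.cnf -revoke joe.pem
openssl ca -config ca.cnf -gencrl -out crl.pem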

Protecting your own CA root key

This is something that kind of makes the operator the weakest link in the whole setup. If anyone has access to your CA signing key, they can MITM any connection of any browser that trusts this CA. So you should apply utmost key management security, with air gapping, and possibly use some kind of cheap HSM like a smartcard - or even better.

Loose ends

I understand that TLSAuth does not solve all problems. But for small groups or projects TLSAuth might make a lot of sense. It's perfect for protecting a phpmyadmin from all the probes on the internet while still making it available to the admins, or you can run your own webmail for your whole family without worrying about the web as an attack vector.

There are a few open questions and loose ends to be explored here. But I'm quite hopeful about using TLSAuth in future projects, maybe even Parltrack.

Possible Parltrack features


I've been maintaining a list of possible features for Parltrack in case the funding campaign hits 10.000 EUR; I'd be interested to hear feedback and other suggestions for this list:

Monitor by subjects

Parltrack already provides listings by subject (e.g. Protection of privacy and data protection), but there is no way to subscribe to changes or new dossiers in these listings. Also currently missing is a user interface where users can browse and select all existing subjects. This feature would allow broad tracking of policy areas instead of the currently supported dossier-by-dossier tracking.

Monitor by search phrase

Simply enter a search phrase and your email address and get notified whenever a dossier appears or changes whose title contains this phrase.

Subscription management

A user-interface to better manage your subscriptions to things you're monitoring.

Visitor Trends

Display any trending dossiers or MEPs based on the visitor access statistics. This way you can identify what or who is currently hot in the EP.

Amendments from the 6th term

Adding the amendments from the 6th parliamentary term (2004-2009) as well; the different formats require tuning the scrapers to handle these earlier documents.

Historical view

Preserving historical data makes it possible to present snapshots from previous points in time as well. A nice timeline visualization is also imaginable.

Localized Parltrack data

Parltrack currently only scrapes in English; some information is easily scrapable in the other 22 European languages as well. Some might be harder, but for NGOs it would definitely make a difference to have this information in their native language - especially when it comes to re-users of the liberated datasets.

Commenting on dossiers and MEPs

Last but not least, a feature that I have long been contemplating. It would be nice to somehow merge Pippi Longstrings, Herr Nilsson and Parltrack into a useful bundle, creating the possibility to comment on legislative proposals and their procedural meta-information in one location. The issue with this is that a public service like this needs a lot of moderation, and I fear that serious NGOs would not want to entrust their internal political insights and commentary to an untrusted 3rd party like Parltrack. This feature is also the basis for the 750 EUR perk in the campaign, by the way ;)


So this is an initial list of medium to big features to be added, in addition to the site redesign and various small improvements that come up in the meantime, with other as-yet-unplanned features possibly added to this list. I expect this to occupy me for about a year, especially if we reach funding levels that allow me to add new data sources as well.

There is also continued cooperation with NGOs reusing the Parltrack database, like La Quadrature du Net's awesome Political Memory and the recently started Lobbyplag initiative, which wants to expand its operations beyond the Data Protection dossiers.

If you agree with all or some of these goals, please consider supporting the current fundraising campaign by donating and by making other people aware of this initiative. If you feel something important is missing, let's talk about it; information and financial feedback are both important for the future of Parltrack. Thank you.



EP - ACTA vote

About two years ago Parltrack started as another tool trying to get at some information that was needed at the time. Since then the amount and quality of data in Parltrack has come a long way. One year ago I had to rewrite all the scrapers, as the European Parliament upgraded their website. A couple of related tools have been developed, for example Herr Nilsson or - the most widely known - Political Memory, or memopol as we call it. ACTA has also been defeated, and I believe Parltrack contributed a small part to this success. Having recent, good data on the ground was essential for campaigning in and around the European Parliament.

I think Parltrack is a tool with lots of potential. I'd really like to find some more time to just data-mine Parltrack, which was one of my initial motivations when I started this project. As a good friend used to say: most of our work in the commons is financed by pre-accumulated wealth from the traditional system. The peculiar nature of this open data combined with free software makes it somewhat difficult to keep the project sustainable. I've tried Flattr, debated and rejected advertising, offered consulting/custom development jobs, and it turns out I'm too small to be eligible for EU funding grants. Depleting resources have lately shifted my attention to other jobs; Parltrack, however, seems to be used quite a lot. The lack of maintenance has already started to show, so to stop this degradation and to allow me to focus more on Parltrack in the coming year I have started an Indiegogo campaign. If you care about freedom, datalove, kittens, puppies, or just me, go here and support this campaign. It will allow me to build more free infrastructure.

thanks, s

Thank you to all my friends who helped me set up this campaign.

ps: for Parltrack related news you can follow @Parltrack, and RSS updates



(image source: http://guckes.soup.io/post/19675336/Fear-FEAR)

The usage of "cyber" as a prefix is a strong hint of a lack of detailed knowledge of a topic, or of an intent to make a profit or take control by diluting the exact issues. Hiding the issues behind such muddled phrasing helps neither understanding nor possible solutions.

The more often you hear "cyber", the stronger your "bullshit-meter" should signal. Chances are high that it's about spreading FUD to sell oppressive and expensive security theater - cyberfud is to the internet what the liquid bomb was to airport "security".

So if this greed is only going to make us more oppressed, not safer, then how do we deal with all these menacing online threats that we hear about in the evening news?

A very wise man said [MP4 video]:

"...I'm suggesting, the internet itself can in no more meaningful sense be secure, than the oceans are secure. The security activities in the oceans, there's the "law of the seas", there are many aspects of it, but the functioning of humanity has depended on the openness and diversity of the seas and i think it depends similarly on the openness and diversity of the internet..."

There's a saying in software development: "a bug is cheapest when caught as early as possible in the development process" - meaning it's cheaper to fix bugs during unit testing than after they've been shipped to customers. So instead of starting an arms race to create expensive defensive snakeoil technology, we should focus on making the software itself more resistant. There are excellent examples; some critical infrastructure - our browsers - shows a good understanding of this principle.

Compare this with Siemens not fixing for 625 days the bug that enabled the Stuxnet malware to operate.

It is irresponsible for a vendor to wait 625 days to fix bugs that can affect critical infrastructure. Choosing the right words is important: forget cyberfud, here's a positive message:

Responsible Vendor

Closed-source vendors with a consistent track record of fixing bugs promptly and exercising diligence should be rewarded; those without should be penalized with full liability.

Instead of spreading cyberfud there should be a publicly available resource where users can check the security track record of vendors; vendors must be absolutely transparent about the vulnerabilities in their products, and it must be possible to objectively compare, measure and rate vendors according to this data. Procurement decisions must treat this as an obligatory condition: "no transparency and no sign of responsibility, no contract".

This idea of vendor liability is not new; hackers raised this issue 14 years ago in a testimony before the US Senate.

I know this issue cannot be solved solely by suddenly turning this industry into responsible vendors; among the other problems are:

  • irresponsible customers disabling security features
  • restrictive laws outlawing security tools reduce the defensive capabilities of the network (like outlawing the immune system),
  • education, instead of paternalizing users into a victim role,
  • increased privacy awareness on the demand side and a strict adoption of the "data-minimization" principle would reduce the amount of "bounty out there" to grab.

The next time you hear about a cyberfud event, or hear some industry guy talking cyberfud, ask a few unsettling questions about commercial vendors externalizing the costs of security - costs which are then exploited by greedy security corporations and politicians. You are also free to ridicule:

"ich find ja, daß william gibson der einzige ist, der 'cyber' sagen darf, ohne dabei blöd auszusehen" — fx #alternativlos #ftw

(Translation: "The only person who is allowed to use 'cyber' without looking stupid is William Gibson".)

PGP key generation


With the use of PGP in everyday life our communication is mostly state of the art and quite expensive to compromise. The weakest links are nowadays the systems where the communication terminates and is decrypted to plaintext: not only are the messages available unencrypted there, but so are the encryption keys. Proper key management becomes essential; diligent key management, however, is something that not even the German Wehrmacht was always able to do properly. :) To reduce the probability of errors, there's a script at the end that automates most of the steps.

One essential aspect of key life-cycle management is key generation.

Note: most of the procedure below can be replaced by using an OpenPGP smartcard, which can generate keys that cannot be easily extracted; all signing and decryption happens on the smartcard itself. Such smartcards however usually have certain storage limits: current technology usually allows 3 keys of 3072 bits, some newer models also 4096.
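
For completeness, on-card generation with GnuPG looks roughly like this; the key never leaves the card, so take the card's offer to make an off-card backup of the encryption key seriously:

gpg --card-status          # check that the card is detected
gpg --card-edit            # then at the gpg/card> prompt:
                           #   admin
                           #   generate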

Generating a new key

Needed things:

  • A secure offline environment for key generation,
  • A secure offline location to store the signing key,
  • Another secure offline location to store a backup of the signing key,
  • A third secure offline location to store a revocation certificate,
  • A pristine offline system for generation and handling of the key,
  • 3 distinct and strong passphrases.

Offline system for key management

The biggest threat to key generation is a trojan/malware-compromised system that leaks not only the keys but also captures the keystrokes of the passwords. To counter this threat it is strongly advised to boot a pristine live CD into an offline environment (yes, disconnecting the network cable is a good idea anyway). I like Tails for such a live system, but Privatix or Liberté Linux might be similarly useful.

By default most PGP keys consist of a signing and an encryption key. The web of trust is woven by signing other people's signing keys. However, there's a trade-off: either the key has a limited lifetime and we have to ask our peers to re-sign the new key from time to time, or we create an unlimited signing key, but then handling becomes difficult as we want to protect this key with increased diligence. Multiple sources online suggest creating a master signing key which is only used (again offline - see a pattern here?) for signing other keys, plus at least two subkeys: one for signing anything else and one for encryption. This allows you to regularly renew your subkeys without having to get them re-signed by your peers; only your master key needs to sign the new subkeys. Regenerating your subkeys regularly also mitigates the lack of perfect forward secrecy in PGP.
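
With a modern GnuPG (2.1 or later) the suggested layout can be sketched like this; the script at the end of the post automates the equivalent classic --edit-key dance (alice@example.org is a placeholder):

# certify-only master key without expiry
gpg --quick-gen-key "Alice <alice@example.org>" rsa4096 cert never
FPR=$(gpg --list-keys --with-colons alice@example.org | awk -F: '/^fpr/{print $10; exit}')
# signing and encryption subkeys with a limited lifetime, to be re-created periodically
gpg --quick-add-key "$FPR" rsa4096 sign 1y
gpg --quick-add-key "$FPR" rsa4096 encrypt 1y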

All three keys should be protected by strong passphrases. Since signing and encryption do not necessarily happen with the same frequency or in the same applications, it makes sense to set different passphrases for each. That means 3 passphrases in total, which sounds hard to remember. Instead of using a long random password, rather use a passphrase consisting of at least 5 words: instead of 14 random letters you only have to remember five words, which should be manageable - especially if you make up a small story out of the words. One way to generate such passphrases is the Diceware method; another is this simple script, which replaces the dice with openssl's rand and can use a word list of any size that fits in memory - seed it with any word list that you trust.
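
The word picking itself is tiny; a sketch of the idea with openssl rand instead of dice (any word list you trust will do, /usr/share/dict/words is just an example):

WORDS=/usr/share/dict/words
N=$(wc -l < "$WORDS")
for i in 1 2 3 4 5; do
  # 32 random bits from openssl, reduced to a line number (the small modulo bias is negligible here)
  sed -n "$(( 0x$(openssl rand -hex 4) % N + 1 ))p" "$WORDS"
done | tr '\n' ' '; echo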

When you generate a key, it is good practice to also generate a revocation certificate in case your key gets compromised. In situations where you might no longer have the key available to generate such a certificate, having one ready up-front can prove useful. For cases where you lose or destroy the encrypted container holding the private keys, it is also useful to have a backup ready - after all, it took a lot of entropy to generate the key, don't waste it ;).
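
Both artifacts are one-liners to produce (0xDEADBEEF stands in for your own key id):

gpg --output revoke-0xDEADBEEF.asc --gen-revoke 0xDEADBEEF              # revocation certificate, store offline
gpg --armor --export-secret-keys 0xDEADBEEF > backup-0xDEADBEEF.asc     # backup of the private key material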

Both the revocation certificate and the backup are hopefully very rarely needed, but they should be well protected. You can choose to use passphrases and encrypted containers, or you can encrypt the cert/backup with a 128-byte cryptographically strong random key and use Shamir's Secret Sharing Scheme to split that key into multiple parts, distributing them geographically and perhaps to trusted persons. Only when a preset number of shares is presented can the backup/cert be accessed. You could also generate a third set of shares for your backup, in case something happens to you and you want your family, friends or lawyers to be able to read encrypted data belonging to a certain private key...
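
A sketch of the splitting step with ssss (with GnuPG 2.1 or later add --pinentry-mode loopback to the symmetric step; the key here is shorter than the post suggests, purely for brevity):

KEY=$(openssl rand -hex 32)                   # random key protecting the backup/revocation cert
gpg --batch --symmetric --cipher-algo AES256 --passphrase "$KEY" \
    -o backup-0xDEADBEEF.gpg backup-0xDEADBEEF.asc
echo "$KEY" | ssss-split -t 3 -n 5            # 5 shares, any 3 reconstruct the key
# later: ssss-combine -t 3                    # paste 3 shares to recover KEY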

As you should very rarely need to generate such a full key, and it is quite a complex procedure, there's a script that tries to automate all the steps above. It depends on gpg, gpgsplit, srm, openssl and ssss, of which I think ssss might need to be installed manually on Tails. The script generates all interim material in /run/shm so that no trace is left on storage media; you have to move the various pieces to their final locations yourself - importing the subkeys into your keyring, distributing the shares for the backup and the revocation cert, and storing the master signing key and its backup copy. I will try to cover the storage of keys on dedicated USB sticks in a later post. I hope you enjoy your new pimped keys (oh, and by the way, nothing prevents you from having more than two subkeys).

Comments and improvements are welcome.

amendments in parltrack


Here's a sneak preview of an upcoming parltrack feature:


The data is possibly not complete, but it provides good additional information. If everything goes well this will be integrated into the MEP and dossier views. Until then you can change the dossier id in the url above, or replace it with the name of an MEP:


Some stats on the data,

  • total number of amendments in the 7th term so far: 168917,
  • amended dossiers: 976,
  • amending MEPs: 775.

Top 3 MEPs:

  1. Olle SCHMIDT: 2038
  2. Philippe LAMBERTS: 1974
  3. Silvia-Adriana ŢICĂU: 1610

Top 3 amended dossiers:

  1. 3075: Structural instruments: common provisions for ERDF, ESF, Cohesion Fund, EAFRD and EMFF; general provisions applicable to ERDF, ESF and Cohesion Fund (2011/0276(COD))
  2. 2482: Common Fisheries Policy (2011/0195(COD))
  3. 2310: Public procurement (2011/0438(COD))

If anyone wants to play with the raw data:


And to see what data might be missing:


Tunnel daemons


molerat

This post looks at various methods of tunneling ssh connections to pierce through restrictive firewalls. The following setups are evaluated:

  • HTTPTunnel+stunnel4: moderately difficult to set up, but once installed it appears as legitimate HTTPS traffic.
  • Iodine: the setup needs the most effort, but once done, and if the network allows DNS queries, it works quite reliably.
  • CurveCP: the setup is quite easy, and once done the link is encrypted and fast. However, firewalls that allow UDP/53 to pass are somewhat rare.
  • ICMPTX: the setup is quite easy, but there is no encryption; use it only to tunnel already encrypted traffic like ssh.
  • Tor: the setup is easy, usage is a bit delayed due to the latency of the Tor network, and requests look like normal HTTPS traffic.

The setup with the most effort also seems to be the most reliable: an iodine-based link over DNS can break out of a lot of networks. If we can use HTTP for browsing but other services are restricted, then the httptunnel is adequate. For less setup hassle but increased latency, Tor tunnels also deliver reliably. The usefulness of ICMP and CurveCP tunnels depends on the firewall configuration, but when they work they're pretty fast.


This method is generally useful in heavily restricted networks, where you can only use the web for browsing and no other services are allowed.

We use the fine tool httptunnel for masking our ssh connection. However, httptunnel is not encrypted, and thus the ssh handshake can be identified in the traffic. To avoid that, we put a tunnel into our tunnel using stunnel.

On the server

First we need to generate the SSL certificate:

openssl req -new -x509 -days 365 -nodes \
       -out htcert.pem -keyout htcert.pem

Set up stunnel; make sure to set the IP address and that the user and group exist:

/usr/bin/stunnel -f -r localhost:<hts port> \
     -d <public ip address>:443 \
     -p htcert.pem -s stunnel4 \
     -g stunnel4 -P ''

When this is done, we can run httptunnel to connect the sshd to the stunnel:

/usr/bin/hts -w -F localhost:22 <hts port>

On the client

Get the generated certificate from the server (don't forget to remove the private key part). You need to rename the cert to its hash value and append a '.0':

mv htcert.pem $(openssl x509 -noout -hash -in htcert.pem).0

Now start the stunnel:

sudo stunnel -c -d localhost:<local stunnel port> \
       -r <server-address>:443 \
       -s stunnel4 -g stunnel4 -P '' -a . -v 3

We need to set the server address (it can be an IP or a name); make sure the user and group exist.

Start the httptunnel:

htc -F <httptunnel-port> localhost:<local stunnel port>

The tunnel will listen on httptunnel-port. Enjoy your ssh-over-https:

ssh -p <httptunnel-port> localhost

Over DNS

In some cases internet access is blocked but DNS traffic is allowed to pass, allowing us to tunnel through DNS.

If you can set up a special DNS entry for this, tunneling through DNS is very easy using the excellent iodine tool. Follow the straightforward installation instructions.

Use this method if the network allows the resolution of names; even if a local DNS server is forced on us, the tunnel will still work, because recursive queries end up hitting your "authoritative server".

Hint: you can manage and delegate a DNS zone for free on afraid.org if you don't have your own.

Using CurveCP on UDP/53

The drawbacks of using any DNS-protocol-based tunnel like iodine are that the tunnel has a huge protocol overhead size-wise, that you need to set up a slightly uncommon DNS configuration, and that domain names you control are usually registered under your real name. If the firewall does not force the use of a local DNS server and allows traffic to UDP/53, then a CurveCP tunnel is a preferred option.

Alternatively you could also run on UDP/80 or another allowed UDP port.

On the server

Note: During testing I had to recompile CurveCP, as the address family was missing from the bind call; see the patch at the end of this post.

Create a server key:

curvecpmakekey serverkey

convert the key to hex, and store it on the client in serverkey.hex:

curvecpprintkey serverkey > serverkey.hex

run the curvecpserver:

curvecpserver <your host name> \
                serverkey \
                <your ip address> \
                53 \
                00000000000000000000000000000000 \
                curvecpmessage /usr/sbin/sshd -i

On the client

This depends on socat, the excellent Swiss Army knife of socket handling.

Store the serverkey.hex that you generated on the server, and run the client:

curvecpclient <curvecpserver hostname> \
    $(cat serverkey.hex) \
    <curvecpserver ip address> \
    53 \
    00000000000000000000000000000000 \
    curvecpmessage \
    -C sh -c "/usr/bin/socat tcp4-listen:9999,bind=127.0.0.1,reuseaddr,fork - <&6 >&7"

Start your ssh-over-curvecp:

ssh -p 9999 localhost


Using ICMPTX you can set up tun devices that tunnel over ICMP, which is quite handy as in some cases ICMP is not filtered and allows you to pierce through the blockades. ICMPTX creates a local network device, so tunneling anything is quite easy after setup.

On the server

Simply run

(sleep 1; ifconfig tun0 <server tunnel ip> netmask <netmask>) & icmptx -s <server ip address>

On the client

Simply run

(sleep 1; ifconfig tun0 <client tunnel ip> netmask <netmask>) & icmptx -c <server ip address>

sshing to your box is then as simple as:

ssh <server tunnel ip>

Using Tor

Tor is great for hiding traffic; its latency is a bit higher than usual, but it's quite possible to get work done through Tor tunnels, even with ssh. If you configure your client-side Tor proxy to use a Tor bridge that runs on port 443, then the tunnel looks like casual HTTPS traffic.

There are two options. You can connect from a Tor exit node to your normal ssh server - in this case skip the "On the server" part and use your normal hostname instead of the .onion address referenced there - or you can run your ssh server as a Tor hidden service, as described below.

On the server

If you want to run your ssh tunnel as a Tor hidden service, you simply have to add the following two lines

HiddenServiceDir /var/lib/tor/sshtun/
HiddenServicePort 22

to your /etc/tor/torrc, and find out the hostname of your new hidden service with:

cat /var/lib/tor/sshtun/hostname

On the client

You simply need to call the torified ssh:

torify ssh <.onion hostname from server>


Stubs for running the server-side daemons using the excellent runit tool can be found on github. These can be most easily installed using deamonize.sh. For client-side setup use the instructions in this post.

curvecp patch

curvecpserver had to be patched, as the address family in the bind call was uninitialized; the patch is below:

diff -urw nacl-20110221/curvecp/socket_bind.c nacl-20110221-new/curvecp/socket_bind.c
--- nacl-20110221/curvecp/socket_bind.c 2011-02-21 02:49:34.000000000 +0100
+++ nacl-20110221-new/curvecp/socket_bind.c     2012-08-19 02:52:25.000000000 +0200
@@ -9,6 +9,7 @@
   struct sockaddr_in sa;
   byte_zero(&sa,sizeof sa);
+  sa.sin_family = AF_INET;
   return bind(fd,(struct sockaddr *) &sa,sizeof sa);

pippi matures


New filtering interface for pippi

I wanted to pippi CETA against ACTA, some other FTAs (Korea, Cariforum) and some other docs, but found it difficult to do so. So over the last few days I revamped pippi a bit.

The result is a new browsing interface where you can directly start pippifications of documents. Clicking the "Pippi ★" button there takes the currently selected document and compares it with all shown starred documents. You can quickly filter all documents by title (this search supports powerful regular expressions), and you can filter on your own documents (more on that later) or on starred documents. The latter is useful for running a pippi against a larger selection of reference documents.

Another new feature is that you are encouraged to be logged in when creating documents; this allows you to later edit the title of a document and to delete it, as long as it has not yet been pippied against other documents. Being the creator of a document also lets you find it again more easily by filtering for your own collection when browsing documents.

So pippification of CETA against all those other documents was easy:

  1. I created all the documents (e.g. I copy/pasted ACTA from Oct 2011 from http://www.euwiki.org/ACTA/Tokyo_oct2, and used a bunch of CELEX ids for documents available on eur-lex),
  2. I went to "browse" and filtered on my own collection,
  3. I starred all the relevant documents for pippification,
  4. I selected from this list CETA, so that it is displayed in the Details
  5. I hit "Pippi ★" and after some delay I got the pippied results presented.

The result looks like this: http://pippi.euwiki.org/doc/ceta_ipr_2012feb

Hint: if you enter ACTA in "Filter by tag" in the top bar, then it hides the copies from the other documents...

digital science consultation


Last autumn the Commission held a consultation, "Consultation on scientific information in the digital age", to which we submitted an opinion (pdf) in cooperation between the FCForum and EDRi. The results came out in January, and the opinions are fairly unanimous. Related to this: academics have started to boycott Elsevier, the market leader among scientific publishers, as they too have had enough of the exploitation.

Finally, a few excerpts from the original opinion:

Our world has progressed from the economics of scarcity to an economy of abundance - at least when it comes to knowledge, information and data. This radical and ongoing shift is affecting all spheres of life, from the entertainment industry to public sector information. Scientific research is sadly an area where the fruits of this change have not begun to be harvested, despite the fact that the internet, which is the most important agent of change in this respect, was born in the research community. Harnessing and nurturing the generative nature of internet-enabled collaboration is a precondition for meeting the EU's agenda of becoming a global leader in innovation.

The role of Europe

We feel that policy formulation at the European level on access and preservation is essential to making progress on these issues, and therefore agree strongly. This is for two reasons: first, scientific research was borderless even before the advent of the information age. Second, the legal frameworks surrounding issues of access and preservation have to a large extent been subject to legislative efforts at the European level. Although no effective harmonisation has come from the Copyright Duration Directive (93/98/EEC), the Copyright Directive (2001/29/EC), the Database Directive (96/9/EC) and IPRED (2004/48/EC), they do affect access and preservation issues to a large extent.

Moreover, we feel that the following problems need to be addressed, primarily in order to be able to pursue Europe's ambitions in science, technology and sustainable economic development:

A majority of raw research data is not accessible to the scientific community as a result of database rights and/or other limitations, or at least not accessible without strings attached. Examples of this are the results of clinical trials of new drugs. There have been several cases where early access to this data would have prevented harmful substances from being prescribed (e.g. the Paxil and the Vioxx scandals) [1].

The current model of the scientific publishing industry is fundamentally broken. Authors submitting articles to scientific journals are unpaid or even have to pay for publication ("author-pays" model). The editorial boards and the peer reviewers of scientific journals are effectively unpaid. The cost of printing journals and of their dissemination has dropped in the past decades. The price of scientific journals nonetheless keeps on escalating [2]. The profit margins of the scientific publishers now exceed 35%, while the general periodical publishing industry operates at a margin of less than 5%. According to financial analysts, no value is added by the scientific publishers that remotely justifies these excessive margins [3].

We suggest that a comprehensive reform should include at least the following actions:

* Database protection should be abolished. Irrespective of any rights that preexisted the aforementioned Directive, there should be a harmonised EU rule that factual data is not eligible for copyright protection.

* Both the duration and the extent of copyright protection should be revised downwards. Any policymaking should take into account that reuse of information is essential for scientific progress.

* The EU should harmonise the transfer of copyrights from the original author to others. Such a transfer would have to be temporary and subject to compulsory registration.

* By extension, further expansion of IPR-enforcement powers through unfortunate directives such as IPRED should be curbed. In this vein ACTA should not be ratified by the EU since it can only worsen the situation in this regard. The chilling effects of excessive damages, provisional measures and injunctions in this field cannot be underestimated.

We also agree that co-ordinating existing initiatives in EU Member States would be an appropriate role for Europe.

Furthermore, we agree that Europe should be involved in supporting the development of a European network of repositories (online archives). In addition to this, we feel that online archives should use open standards as meant by the EIF 1.0 definition to the furthest extent possible, in order to foster genuine access to knowledge. Whenever possible, scientific information should be public, the same way legislation and jurisprudence are.

Finally on this point, we strongly agree that Europe should encourage universities, libraries, funding bodies, etc., to implement specific actions. A specific action should be that (European) funding of scientific research should be made contingent on a) unencumbered disclosure of both raw (provided that there are no privacy issues with raw data) and processed data and b) publication through open access scientific journals. Release early and release often, to borrow a mantra from the highly successful open source software development community, should be the credo of European research.

...we would like to stress that the current process of public funding of research by the EU is deeply flawed, but the problems are not unsolvable. The biggest problem is that the areas of research that receive public funding are currently selected on the basis of framework programmes of half a decade ago. To quote a commentator in Forbes Magazine: ".. the system of awarding funds is insular, long winded and in no sense responsive to markets – these are five and six year plans, laying out innovation priorities from, say 2007 – 2013. Which areas of research should receive money is decided by the people who will bid for it and projects are assessed by people who are also applying. The impetus for change is dampened by the weight and self-serving nature of the system." [4]

An alternative avenue that is worth exploring is the use of competitions to solve specific scientific challenges. This model has been deployed successfully by DARPA and by private actors like the X-prize challenges or the InnoCentive marketplace [5].

Furthermore, user-driven innovation should be fostered. Examples of this phenomenon are so-called fablabs, which put prototyping equipment in the hands of artists and designers. Even more grass-roots are hackerspaces, which, despite their tremendous difficulties with housing and materials, have already been shown to act as incubators for SMEs [6].

1. Jasanoff, Transparency in Public Science: Purposes, Reasons, Limits, in: Law and Contemporary Problems, vol. 69, Summer 2006, pp. 21-45.

2. see also Glenn S. McGuigan, Robert D. Russell, The Business of Academic Publishing: A Strategic Analysis of the Academic Journal Publishing Industry and its Impact on the Future of Scholarly Publishing, in: Electronic Journal of Academic and Special Librarianship, v.9 no.3 (Winter 2008)

3. http://southernlibrarianship.icaap.org/content/v09n03/mcguigan_g01.html

4. http://www.forbes.com/sites/haydnshaughnessy/2011/07/11/europes-disintegration-its-not-about-the-piigs-or-the-euro/

5. http://www.innocentive.com/

6. An example of this is the innovation-award-winning soup.io

hacktivism hour cccamp2011


At the camp we had a very inspiring radio show [ogg]. One interesting topic that came up was copyright abolitionism. As a free software developer it's hard for me to accept losing the protection the GPL provides against free software being closed down. I can agree, however, that we must shift our discourse from copyright to alternative ways of incentivising value creation. If we can replace copyright with alternative systems that empower creators and amplify creation while somehow also preserving the four basic rights of free software, I'm all for that. (I admit that mixing free software ideas with more general value creation needs some more refinement, but you get the idea.)
