• 18 Oct

    Gwyneth Llewelyn recently offered a proposal to try to plug “the analogue hole” that makes content theft inevitable. Her proposal drew a lot of criticism, particularly from open source developers, and she has since withdrawn it.

    I’m glad to read that she has; I was among those with objections to the proposal. But I’m disappointed by her reaction to the criticism she received:

    The current community of developers — and by that I mean non-LL developers — is absolutely not interested in implementing any sort of content protection schemes.

    … Their argument is that ultimately any measures taken to implement “trusted clients” that connect to LL’s grid will always be defeated since it’s too easy to create a “fake” trusted client. And that the trouble to go the way of trusted clients will, well, “stifle development” by making it harder, and, ultimately, the gain is poor compared to the hassle of going through a certification procedure.

    I won’t fight that argument, since it’s discussing ideologies, not really security. Either the development is made by security-conscious developers, or by people who prefer that content ought to be copied anyway (since you’ll never be able to protect it), and they claim that the focus should be on making development easier, not worrying about how easy content is copied or not.

    … “Technicalities” are just a way to cover their ideology: ultimately, they’re strong believers that content (and that includes development efforts to make Second Life better) ought to be free.

    Despite what Gwyn suggests, one can object to a specific content protection scheme without being an ideological extremist who believes that everything should be free. Yes, there are individuals who take that viewpoint. Many of them are quite vocal, and some are rather arrogant and obnoxious. (I am of the opinion that this latter kind ought to be swatted hard over the head with a rolled-up newspaper. Repeatedly.)

    But to imply that anyone opposing her proposal must be some kind of anticommercial tekkie-hippie is fallacious and juvenile, and just as dismissive as the rudest comments she received. I must admit that I expected better from Gwyn.

    Now then, let me explain my opposition to and criticism of the proposal. (This is not criticism of Gwyn as a person, nor of any of her other ideas besides this particular proposal.)

    While I do appreciate and respect the choice to make one’s own efforts open and free, I do not believe everything should be forced to be free, and I did not oppose the proposal based on my views on that topic. I opposed it because I see three major flaws in the proposed system, two of them purely security-related:

    1. the certificates could be easily forged, which defeats the purpose of having them at all
    2. an effective certification system would put an extraordinary burden on developers
    3. the system does not address the most commonly exploited methods of content theft

    I’ll expand on these points so that there can be no confusion about why I objected and still object to such a system. (I’ll give fair warning, though, that this is a rather long and probably dull post by most standards.)

    Firstly, as others have said: where there is a certificate, there is a way to forge a certificate. Even a certificate embedded in the executable binary code can be extracted, and an uncertified client created which fools the server into believing it is a certified one.
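
    To make that concrete, here is a minimal sketch in Python of pulling an embedded certificate out of a binary. (I’m assuming the certificate is stored as a recognizable PEM blob; a real embedding might obfuscate it, but that only slows the attacker down.)

        import sys

        BEGIN = b"-----BEGIN CERTIFICATE-----"
        END = b"-----END CERTIFICATE-----"

        def extract_certificates(path):
            """Return every PEM certificate blob found in the file at `path`."""
            with open(path, "rb") as f:
                data = f.read()
            certs, pos = [], 0
            while True:
                start = data.find(BEGIN, pos)
                if start == -1:
                    break
                stop = data.find(END, start)
                if stop == -1:
                    break
                certs.append(data[start:stop + len(END)])
                pos = stop + len(END)
            return certs

        if __name__ == "__main__":
            for cert in extract_certificates(sys.argv[1]):
                print(cert.decode("ascii"))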

    And the target for such certificate extraction would not be the open source developers with their little custom viewers; it would be Linden Lab’s own released clients. Why? Because even after LL found out that someone had extracted the certificate from one of their official releases, they could not do anything about it.

    Would they block all viewers using that certificate? Doing so would block all users who are using LL’s own viewer! Users would be forced to download a new version every day as LL struggled to keep ahead of the malicious users extracting the new certificates. (Plus, they would have to use a different method of embedding the certificate for every release, since the old methods would have been figured out.)

    Unless LL is willing to try to keep that up, the certificate program would cease to be effective at distinguishing authorized viewers from unauthorized ones.

    Secondly, let me explain the burden of such a certification process on open source developers. It is not a matter of developers complaining, “Waah, it would take me 10 more minutes to implement my code! I’m going to go cry in a corner now.” A certification process as Gwyn described would, if effective, put a near-total damper on many of the most important areas of development of the viewer code.

    Suppose that you are trying to fix a memory leak, like Nicholaz used to do. In order to test whether you have fixed it, you would need to run the viewer and travel around SL to make it load lots of content into memory. However, because of this certification program, you must have a certificate embedded in your viewer in order to see such content, and thus to test the fix. (I am assuming here the embedded certificate scenario, because keeping the certificate as a separate file would render the system extraordinarily easy to spoof.)

    Every time you recompiled your code, you would have to send your source code and compiled viewer off to whatever company does the certification and wait for them to look it over and embed the certificate in it and send it back. Such a process would take at least several days at the start (after all, they have to inspect your source code to make sure you haven’t added something nasty!). That’s several days of waiting between the time you write the code, and the time you can check whether it worked. If you find out that it didn’t work, you will have to wait another several days for them to certify your next attempt.

    Now, I said it would take several days at the start. If the company could not keep up with the number of developers requesting certification for their little test versions, then the delay would grow longer and longer. You would soon be waiting a week, two weeks, a month for them to process the certified binary — if you (or they) hadn’t given up completely by then.

    A month between testing bug fixes? I can barely even recall my approach to the problem after two days! No developer or other creative person would volunteer to work in such a stifling environment.

    Now, I admit, not every bug fix would require a certified viewer to test with. But many of the worst, most disliked kinds of bugs would. I don’t presume to speak on Nicholaz’s behalf, but I doubt his work would have been possible if such a certification process had been required.

    Fortunately, though, the weakness of the system means that many developers would soon begin to use forged certificates to circumvent the system and continue their testing work. Unfortunately, such circumvention would, more likely than not, be illegal in the United States due to the DMCA, and in any other country with similar legislation. Some developers would be willing to take that risk for the sake of improving the viewer, but many would not.

    Finally, even if we set aside the other issues, the proposal does not address the most common methods of content theft. Even if the process stopped uncertified viewers from being able to see the protected content, the certified viewers still have gaping holes in them.

    I will set aside the issue of GL ripping, something that cannot possibly be addressed by Linden Lab, and instead focus on the insecurity of the cache and the data packets that are sent by the sim.

    The viewer as it is now does not employ any real encryption of either the cache or (I assume, though I may be wrong) the data going to and from the server. Textures are easily extracted from the cache; prim shapes are (I hear) also cached, and thus can be extracted. Even if they were not cached, textures, prim shapes, avatar shapes, animations, and more could be extracted from the data packets being sent by the sim to your computer.
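
    To illustrate how low the barrier currently is, here is a hypothetical sketch in Python of ripping textures from an unencrypted cache. SL textures are JPEG 2000, so the sketch simply hunts for JPEG 2000 signatures inside the cache files; I don’t know the exact cache layout, but the principle holds regardless:

        import os

        JP2_MAGIC = b"\x00\x00\x00\x0cjP  \r\n\x87\n"  # JP2 container signature
        J2C_MAGIC = b"\xff\x4f\xff\x51"                # raw JPEG 2000 codestream (SOC marker)

        def rip_textures(cache_dir, out_dir):
            """Scan every file under cache_dir and dump anything that looks like JPEG 2000."""
            os.makedirs(out_dir, exist_ok=True)
            count = 0
            for root, _dirs, files in os.walk(cache_dir):
                for name in files:
                    with open(os.path.join(root, name), "rb") as f:
                        data = f.read()
                    hits = [i for i in (data.find(JP2_MAGIC), data.find(J2C_MAGIC)) if i != -1]
                    if hits:
                        out = os.path.join(out_dir, "texture_%04d.jp2" % count)
                        with open(out, "wb") as g:
                            g.write(data[min(hits):])
                        count += 1
            return count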

    That is the current state of affairs. Well, what if Linden Lab did start to encrypt data packets and the cache? In that case, the viewer must have the decryption method and key (or else a method of receiving or generating the key) programmed into it. But because the viewer code is being released as open source, Linden Lab would face the difficult choice of whether to release or to withhold that piece of code. Either choice would have significant consequences.

    If Linden Lab chose to release the code for the decryption and keying methods, the encryption scheme is defeated. By studying the code, a reasonably proficient content thief could figure out how to decrypt the cache and incoming data packets, and thus gain access to the protected content, even while running a certified viewer. For a while, the barrier to content theft would be somewhat higher than it is now, but eventually some malicious user would release a tool to copy content without needing to understand the concepts behind it. We would then be back to the state we are in today, but with an ineffective encryption scheme added on top.
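
    To see why, here is a deliberately simplified sketch in Python, using an off-the-shelf symmetric cipher; the key name and packet contents are invented for illustration. If the key (or the code that derives it) ships with the open source tree, decryption is a copy-and-paste job, not cryptanalysis:

        from cryptography.fernet import Fernet

        # Stand-in for a fixed key shipped in the viewer source. In the open
        # source scenario, this constant (or the routine deriving it) is public.
        EMBEDDED_KEY = Fernet.generate_key()

        def decrypt_packet(payload: bytes) -> bytes:
            """Anyone who can read the source can write this function."""
            return Fernet(EMBEDDED_KEY).decrypt(payload)

        # A content thief doesn't "break" the cipher; they just reuse the key.
        packet = Fernet(EMBEDDED_KEY).encrypt(b"texture bytes from the sim")
        print(decrypt_packet(packet))  # b'texture bytes from the sim'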

    On the other hand, if Linden Lab chose to withhold the decryption code, open source viewers would be locked out. Or at least, they would not have access to the protected content. So in that respect, yes, a combination of certifying only LL’s viewers, encrypting the cache and data packets, and keeping the viewer partially closed source might be effective at protecting content, for a while. I’m not a cryptography expert, so I can’t even begin to guess how long it would take to break the encryption. Perhaps long enough that LL could change encryption keys and certificates (and force everyone to upgrade) before the old ones are found out.

    But doing so would impose the burdens on developers I described earlier. And worse, as Gwyn so rightly points out, it would offend and alienate most of them, even the ones who aren’t obsessed with freeing everything. I’d expect it to light a fire under OpenSim, too, as developers shifted their focus to more open, less burdensome systems.

    Would Linden Lab be willing to undo the Dia de Liberation, cripple open source development, and turn some of their most effective supporters into resources for the competition, in exchange for the appearance of strong content protection?

    Maybe. I hope not, but it’s hard to say anymore.

    So is the situation hopeless? Can content theft be stopped only by sacrificing open source development? Can content theft even be stopped?

    Certainly, it’s not possible to completely stop 100% of content theft. Gwyn recognizes that, as does just about anybody familiar with the problem. The idea is, instead, to raise the barrier for content theft; to make it more difficult.

    But even that is a matter of debate. If you raise the barrier, eventually someone will figure out how to get over it, and they or someone else will create a tool to let other people get over it, even people who have no idea how the tool works or what it’s circumventing. You can raise the barrier again and again, but that only buys you time. And often, that temporary gain in protection comes at the cost of a permanent loss elsewhere — creating extra hassles for users, stifling development, etc.

    Can content theft be prevented? Probably not. I wish it could; I’d love to have a punchy ending here, to reveal that if you do X, Y, and Z, all content theft will be stopped. But it doesn’t work that way.

    More likely, the key to dealing with content theft is not prevention, but rather detection and enforcement, neither of which is being carried out in a reliable, objective, or effective manner.

    Naturally, fixing them is easier said than done. But that’s how it is.


    Posted by Jacek Antonelli @ 8:34 pm

7 Responses

  • McCabe Maxsted Says:

    I really don’t see how any copy protection scheme will work as long as LL is lackadaisical about the problem. Where is the police blotter for content thieves? The walls of shame? Just why do so many theft-related JIRAs sit unattended? Would love to hear some answers on that. I highly doubt most people even know the consequences of getting caught ripping off other people’s work, although perhaps that’s LL’s strategy: keep the topic low, hush-hush; don’t let everyone know how easy it is to copy (or rather, find the resources you need to copy) content in SL. I can’t see that lasting, though. Sooner or later, someone is going to release a “theft viewer” that’ll hit mainstream, and it’ll do for SL what Napster did for MP3s.

    (And contrary to what some have said, it’s OS developers’ consciences that keep these tools out of the mainstream, not our lack thereof).

    Wish there were an easy answer for this.

  • Eugene Says:

    Nice article. Thanks. :) Eugene

  • Cee Ell Says:

    After suffering through all the sturm und drang on this matter over in ‘Second Thoughts’, I thought I’d better go look to the source. I didn’t find anything like ‘Bolshevik Open-Source thugs mugging’ poor Gwyneth in either the original proposal’s comments or the JIRA entry.

    Back in _this_ universe, though: neither did I find any sort of broad-based response — much less an antagonistic groundswell at the level even Gwyneth implied in her retraction spot. Between the two locations, there was a grand total of ten (10!) responses, of which perhaps two could be construed as dismissive or impolite. More to the point, virtually all of the comments (including the impolite ones) raised at least reasonable technical difficulties with the proposal — and not a one of them even alluded to the notion that “information, including _your_ digital content, must be free!”

    It’s unfortunate, and a little mystifying, that she drew the conclusions that she did from these comments. In any case, I have to disagree with the assertion that it’s a good thing the proposal was withdrawn. The establishment of security (and specifically, encryption) schemes ultimately benefits from the give-and-take of protracted, detailed exchanges. It’s not impossible that something useful can be made of this proposal, or one that serendipitously arises from its discussion. Taking your ball and going home is not the way to that end.

  • Harke Hartnell Says:

    great article.

    Unfortunately, McCabe, there’s no easy answer except:

    “That’s a lot of hard bullshit.”

    I totally agree with you; LL’s strategy of keeping things on the down-low is detrimental to everyone. They don’t fix anything until it’s publicly known as an exploit, and that’s a shame. It’s akin to releasing a car with faulty brakes, but nobody who’s crashed has lived to tell about it, so it’s no big deal.

  • Dale Innis Says:

    Yeah, I was quite surprised and disappointed in Gwyneth there also. She proposed a technical idea that had fundamental flaws, people pointed them out, and she concluded that people must not really be interested in solving the problem at all. Bit of a non sequitur! Thanks for going into such depth of analysis; you are a paragon of patience. :)

    (I vaguely recall having posted some wise insightful comment to her “withdrawal” post myself, and never seeing it show up; maybe I pushed the wrong button. As I recall I gestured at the long and pretty unsuccessful battle that Blizzard has waged against WoW bots, and argued that given that they have an easier problem to solve and still haven’t succeeded, it’s not too surprising that it’s hard to come up with a working scheme for SL.)

    Prokofy Neva’s done the same kind of thing: proposing a means of content protection that is not in fact technically possible, and then when people point out that it’s not possible declaring (profanely) that he’s being persecuted by people who don’t want any copy protection at all. Not helpful!

    I think part of the problem here is that, unless you’ve looked into the problem in some depth, it doesn’t look any harder than technical problems that get solved all the time. We can make avatars fly; surely we can keep people from getting into SL with unauthorized clients! It just turns out, for relatively complicated reasons, that the latter problem is much (much (much)) harder than the former…

  • Dale Innis Says:

    Small note on: “I’m not a cryptography expert, so I can’t even begin to guess how long it would take to break the encryption”. You wouldn’t actually have to break the encryption in the usual sense. In this scenario, you’d have not only the ciphertext, but also a binary containing the key and the decryption algorithm (or, in more complex implementations, the key-establishment data stream, and a binary containing the key-establishment and decryption algorithms). This makes the task of getting access to the cleartext a matter more of reverse-engineering than cipher-breaking, and (assuming the cipher is nontrivial) that’s a much easier task.

    Generally it’s reasonable to base a protection scheme on someone not being able to break a strong cipher. But it’s much less reasonable to assume that people won’t be able to reverse-engineer an executable. Happens all the time. :)

  • Gwyneth Llewelyn Says:

    There were two reasons for retiring my proposal (or “giving up”, if you prefer). The first one has to do with the quality of the opinions, not the quantity. I seldom regard “opinion-making” as something that should be tied to the “wisdom of the crowds” or the “number of votes”. Both are popular methods to estimate a trend of overall satisfaction with an idea or concept, but much more important to me are the qualified opinions: comments made by people who have studied an issue thoroughly, from a technical (and not emotional!) point of view, and not merely because they “have an opinion”. And it was the number of qualified opinion-makers and their comments that ultimately made me give up on the proposal.

    The second issue is purely technical. My suggestion, contrary to what Jacek somehow implies, did not require a manual review of the code (which, indeed, would take months given the amount of code in the SL viewer), but a simpler certification process for the developers, tied to a key exchange between LL, the developer, and the user with a “certified client”. The certificate would be issued based on a checksum of the binary; this could be done online and would take about as long as uploading a 50 or 60 MByte file, i.e. several minutes to an hour depending on the developer’s upstream bandwidth, but no more than that. The key idea was an automatic process.

    This three-party method can, however, be forged or intercepted reasonably easily, assuming one has the source code for the bits doing the validation. My assumptions were the following:

    - the developer gets certified by Linden Lab. This means that there is a special relationship between LL and that particular developer. That’s not an extraordinary requirement. For instance, testing the Open Grid Protocol with LL was limited to a number of people who had to provide minimal data to LL; being part of the Registration API network requires a bit of effort (and LL manually activating certain capabilities on your avatar), but it’s also not very hard to do; whereas enrolling for the Risk API is, indeed, not easy. Age validation can go from purely automatic to manual faxing of documents to LL. So, overall, LL is already used to dealing with “enrollment procedures” for special kinds of developers; this would just be a new one. Residents eager to join “special services” are also reasonably used to a few extra procedures.

    “Certification” in this context meant mostly access to an LL-hosted site that accepts, in a trusted form, the avatar login from a certified developer, and gives an upload option (for the SL client binary), emitting a digitally-signed certificate for it, tied to that specific client’s checksum (a rough code sketch of the whole flow follows after this list)

    - developer now distributes the certified binary with the digital certificate

    - user downloads both, and logs in to SL. At this point, the SL client presents its credentials (the signed certificate) and is checked for validity. If it’s not valid, SL will display only content that is flagged as “anybody can view it”; the rest will be invisible (either per parcel, or universally, etc., as per my many suggestions)
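
    To make this concrete, here is a minimal sketch of that flow in Python; all names are illustrative, nothing here is a real LL API. LL signs the SHA-256 checksum of the uploaded binary, and the grid later checks the certificate against the checksum the client reports, which is precisely where the trouble starts:

        import hashlib
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                          salt_length=padding.PSS.MAX_LENGTH)
        ll_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

        def certify_binary(binary):
            """LL's side: sign the checksum of an uploaded client binary."""
            return ll_key.sign(hashlib.sha256(binary).digest(), PSS, hashes.SHA256())

        def grid_accepts(reported_checksum, certificate):
            """Grid's side: does the certificate match the checksum the client reports?"""
            try:
                ll_key.public_key().verify(certificate, reported_checksum,
                                           PSS, hashes.SHA256())
                return True
            except InvalidSignature:
                return False

        official = b"bytes of the official viewer binary"
        cert = certify_binary(official)

        # The flaw: the checksum is self-reported. A malicious client simply
        # replays the official binary's checksum and certificate, and the grid
        # cannot tell the difference.
        forged_report = hashlib.sha256(official).digest()
        assert grid_accepts(forged_report, cert)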

    So where are the major flaws of this method?

    The first problem is how the client sends its checksum (so that on LL’s side, they can check if it’s the same as in the certificate presented to it). As Jacek and others pointed out, there is no way to prevent a malicious client from presenting a fake checksum along with a valid certificate. Granted, once LL found out about that particular certificate, it could revoke it immediately, but Jacek is right: revoking the certificates for LL’s own clients (which would be the prime candidates) would be a mess.

    The alternative would, of course, be a release of a closed-source application running on your computer (effectively, a DRM agent) that validates the checksum for the client. This is basically what many applications out there (including Windows, but also almost everything from Adobe or Autodesk) do. Naturally, this would mean that LL would have to develop a DRM agent for all platforms and operating systems, existing and future ones (i.e. if someone ports the SL client to the iPhone, LL would have to develop a DRM agent for the iPhone too). It’s unmanageable. Also, as everybody knows, it can be compromised, too.

    I was stumped to find a solution for this issue, and that was the technical argument for not pursuing my idea any further. Granted, you could place the burden on the side of the developer: each developer would create their own DRM mechanism, and hand out “license keys” for each connection. This would work the following way: when logging in, the SL client would contact the developer’s web service and get a license key, signed with a certificate emitted by LL to the developer. Then it would present that license key to LL’s grid servers, which would accept it if they could validate the developer’s signature.

    How each developer issued licenses would be up to them. They could use a checksum thingy embedded in a part of the code that would not be open source, but it would be their choice. LL would simply accept signed license keys, and that would be all. It would be up to the developer to make sure that their license keys were not abused. So it’s really pushing the blame elsewhere. The advantage? Binaries would not need to be uploaded to LL for validation (LL would simply trust the signed license key), and development would never be “stifled” that way. Naturally enough, LL would provide, with their own SL clients, the appropriate DRM agent.
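
    Again purely as an illustrative sketch (same caveats: invented names, no real LL API), the signature chain would look something like this. LL certifies a developer’s signing key once; the developer’s service signs per-login license keys; the grid only follows the chain:

        import os
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                          salt_length=padding.PSS.MAX_LENGTH)
        ll_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        dev_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

        # One-time enrollment: LL signs the developer's public key.
        dev_pub = dev_key.public_key().public_bytes(
            serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
        dev_cert = ll_key.sign(dev_pub, PSS, hashes.SHA256())

        # Per-login: the developer's web service hands the client a signed license key.
        license_key = os.urandom(16)
        license_sig = dev_key.sign(license_key, PSS, hashes.SHA256())

        def grid_accepts(license_key, license_sig, dev_pub, dev_cert):
            """Trust the license iff LL signed the developer key that signed it."""
            try:
                ll_key.public_key().verify(dev_cert, dev_pub, PSS, hashes.SHA256())
                serialization.load_der_public_key(dev_pub).verify(
                    license_sig, license_key, PSS, hashes.SHA256())
                return True
            except InvalidSignature:
                return False

        assert grid_accepts(license_key, license_sig, dev_pub, dev_cert)

    The revocation story is the real advantage here: LL distrusts one developer certificate, and every license that developer ever signed stops working.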

    The weak spot would then be on the developers’ side: figuring out a way to prevent their system from being hacked or compromised by a malicious user. On LL’s side, once a developer started handing out licenses for malicious use, they’d simply revoke that developer’s certificate, and none of those malicious clients would work.

    Pushing the DRM issue onto open source developers is, however, a hard strain on them :) For the libertarians among them, this would be swallowing a very bitter pill :) And, yes, I also don’t see how you could release open-sourced code including the DRM code without having it easily subverted by a third party.

    The second problem (interception) is, however, impossible to deal with. Due to the Analogue Hole present in all digital content, tools like GL Interceptor will always work (even if the cache & stream contents were encrypted, which they’re not). At the very least, textures will always be easy to copy (although once you just have mesh data in VRAM, I don’t see how you could extract “prim data” from it, meaning that at least objects would be safe).

    There are no “technical flaws” with a DRM system (assuming that nobody claims it’s 100% safe, because it never will be); there is, however, an ideological flaw for some developers, and while I’m fine with discussing technical issues, I’m not interested in discussing the merits of a proposal that gets a part of the development community itchy because of their ideology, principles, and moral stance regarding DRM. It would be not only unwise, but even unfair, to stubbornly push an issue in spite of that; it would be like “demanding” that Richard Stallman endorse DRM on the FSF projects because some of them are being used for illegitimate or even illegal purposes. It would be completely worthless: a fight not worth fighting, or rather, not even worth thinking about. DRM is a dirty word in the open source community, and thus any proposal that mentions it would ultimately fail. That might be the best reason why LL has never ever suggested it in public.

    So, I’ll take Jacek’s advice and focus on content theft detection and effective social enforcement instead of “magic tools” that make content theft very hard but never impossible. And as McCabe so well put it, fortunately, the number of CopyBot sellers and related sites is small, mostly due to the ethical conscience of most developers, who want nothing to do with content theft.

    BTW, Dale, my apologies if your comment was deleted by mistake on my blog. Sometimes they go into the spam queue for no particular reason and I press the “delete” button too quickly before realising that there was a valid comment there :( I’m sorry! It doesn’t happen often, but sometimes it does.