• 01Feb

    Scripters (myself included) have long bemoaned the lack of any way to write text to a notecard. You can read text from a notecard (although it’s a PITA due to LL’s non-blocking dataserver lookup; and it’s also apparently not possible with notecards that have any embedded inventory items like landmarks), but there’s no way to write to it. That means you can’t save script settings or data to a notecard so that they persist between script resets.
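
    (For the unfamiliar, reading a notecard goes roughly like the sketch below: you ask for one line at a time and wait for the dataserver event to hand it back. The notecard name “config” is just an example.)

        // Minimal sketch: read a notecard named "config" line by line.
        // Each line must be requested, then collected asynchronously
        // in the dataserver event.
        string  gCard = "config";
        integer gLine = 0;
        key     gQuery;

        default
        {
            state_entry()
            {
                gQuery = llGetNotecardLine(gCard, gLine);
            }

            dataserver(key query_id, string data)
            {
                if (query_id != gQuery) return;
                if (data == EOF)
                {
                    llOwnerSay("Done reading.");
                    return;
                }
                llOwnerSay("Line " + (string)gLine + ": " + data);
                gQuery = llGetNotecardLine(gCard, ++gLine);
            }
        }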

    Scripters, being naturally clever folk, have instead relied on the fact that you can change an object’s name and description. So, just save the settings/data to the object’s description, and it’ll be there the next time the script runs! This was especially handy because the servers previously allowed much longer descriptions to be saved than they were supposed to.
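
    (The trick looks something like this sketch; the “42|0.5” payload is just one example of packing a couple of settings into the description so they survive llResetScript.)

        // Minimal sketch of the description trick: stash settings in the
        // object's description so they survive a script reset.
        default
        {
            state_entry()
            {
                // Restore whatever was stashed before the last reset.
                list settings = llParseString2List(llGetObjectDesc(), ["|"], []);
                if (llGetListLength(settings) == 2)
                {
                    llOwnerSay("Restored channel " + llList2String(settings, 0)
                               + " and volume " + llList2String(settings, 1));
                }
            }

            touch_start(integer num)
            {
                // "Save" two example settings, then reset; state_entry
                // will read them right back.
                llSetObjectDesc("42|0.5");
                llResetScript();
            }
        }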

    Of course, anyone who has been following the Linden Blog should be aware by now that the name/description size limits will start being enforced. I won’t say much about that, except to note that this is another instance of “Very useful exploit-turned-essential-tool that occasionally caused problems, so the Lindens removed the tool rather than fixing the problems”. See also: Megaprims.

    What I’m more interested in right now is the excuses the Lindens give for why we’re not allowed to write to notecards. For example, Prospero Linden wrote:

    Re: storing persistent data: the way the asset server works, a UUID is a unique identifier to an asset. If you change anything, it has to be a new asset, because if somebody else, say, had the same notecard before it was changed, you don’t want your edits to go to this other person’s notecard. It can happen that different people with the same object in their inventory in fact just point to the same UUID in the asset server. If you were to be able to write to a notecard from the script, *every* write command would create a new asset, which would create a load of additional problems.

    I call bullshit.

    Assuming the technical description is accurate, it follows that every time you edit a notecard by hand, you must already be creating a new asset. Otherwise, as Prospero pointed out, giving a friend a notecard and then editing your own version of that notecard would also change your friend’s version. (Which might be cool for collaborative efforts, but would otherwise be annoying.)

    New notecard assets are already being created ’round the clock as Residents write to-do lists, notes for a friend whose IMs get capped, configurations for notecard-reading scripts, and so on. In the time it took me to write that last sentence, I would guess that at least 20 new notecard assets have been created in SL, not to mention scripts, clothing, and more.

    Sure, the asset server has plenty of problems already, but why does Prospero think notecard-writing scripts would lead to additional ones?

    Perhaps he’s worried about the sheer volume of notecard-writes that could occur. Scripts could, presumably, work a lot faster than Residents can. A griefer could perform a Denial-of-Service attack on the asset servers, by sending out self-replicating, notecard-writing objects that would create hundreds of new notecards every second! Egads!

    Well here’s a thought: Put a delay on it.

    You know why people don’t use SL scripts to send out massive amounts of spam email, despite the presence of an llEmail command? Because every call to llEmail pauses the script for 20 full seconds. Rather annoying if you have a legitimate reason to send emails from a script, but nonetheless an effective way of discouraging abuse.
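
    (To illustrate how heavy-handed that throttle is: the loop in the sketch below takes well over a minute to send five messages, because the script sleeps for 20 seconds inside each llEmail call. The address is just a placeholder.)

        // llEmail forcibly sleeps the script for 20 seconds per call,
        // so this loop takes roughly 100 seconds to finish.
        default
        {
            touch_start(integer num)
            {
                integer i;
                for (i = 0; i < 5; ++i)
                {
                    llEmail("someone@example.com", "Report " + (string)i,
                            "Sent at " + (string)llGetUnixTime());
                }
                llOwnerSay("Done, about 100 seconds later.");
            }
        }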

    Even a notecard-writing function with a 60 second delay would be infinitely more useful than having nothing.

    And that’s all assuming Prospero (and other Lindens who have made similar statements) is right about how notecard-writing works. It seems like a rather silly system, of course, but it certainly wouldn’t be the first silly system to come out of Linden Lab. A more sensible system would, one might think, create a new asset only when one was actually required: that is, when a notecard is newly created or copied.

    Such a natural and obvious solution would also eliminate any reason for the notecard-writing function to create a new asset every time it’s used. The prim with the script in it would already have a notecard; when that notecard was copied to the prim, a new asset would be created one time. All subsequent writes to that notecard would modify the same asset. If you wanted to copy the notecard out of the prim, that would create one more asset. It’s really quite simple!

    On the other hand, it wouldn’t be as useful for testing if it’s actually possible to run out of these so-called “Universally Unique IDs”. ;)


    Posted by Jacek Antonelli @ 9:51 pm


9 Responses

  • Nexeus Fatale Says:

    Actually, one of the other problems caused by those longer-than-allowed descriptions was that objects act up (see: http://jira.secondlife.com/browse/SVC-1118).

    But as for the notecard-writing function delay – I would have to disagree with you that it would be the best solution. The delay, in my estimation, would only cause more issues with adding data to a notecard, because there is already a delay. Not only would you be stacking delays on top of delays, but you would also risk scripts becoming backed up with requests, crashing, or breaking entirely.

    Maybe the best solution is the use of objects or notecards inside a scripted object.

  • Jacek Says:

    @Nexeus: I certainly don’t think a delay on the function would be the *best* solution, merely that it would be *a* solution, and one that might not agitate the asset servers too much. The delay would be annoying, but at least it would allow a little bit of functionality (e.g. backing up script settings before resetting).

    I’d much prefer a function with no delay over a function with a long delay; but I’d prefer a function with a long delay over having no function at all.

  • Erbo Evans Says:

    Well, why couldn’t it be a sort of stream-oriented thing? Have one call which opens a “notecard stream” that could be written to, and have the streamed data buffered in memory. Another call would close the stream and, at that point, save changes to the actual notecard, creating the new asset. In other words, it would work much like the way editing notecards works in the client now. I should expound upon this in a post of my own…
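
    Something along these lines, say (a rough sketch only – since LSL can’t actually write notecards today, the object description stands in here for the single save that would create the new asset):

        // Buffer-then-flush: writes pile up in memory, and only the
        // final "close" touches persistent storage once.
        list gBuffer;

        bufferLine(string line)
        {
            gBuffer += [line];      // cheap: nothing persistent touched yet
        }

        flushBuffer()
        {
            // One write when the "stream" is closed, analogous to saving
            // the notecard in the client and creating one new asset.
            llSetObjectDesc(llDumpList2String(gBuffer, "|"));
            gBuffer = [];
        }

        default
        {
            touch_start(integer num)
            {
                bufferLine("volume=0.5");
                bufferLine("channel=42");
                flushBuffer();
                llOwnerSay("Saved: " + llGetObjectDesc());
            }
        }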

  • Notecard Writing: A Modest Proposal « Evans Avenue Exit Says:

    [...] Writing: A Modest Proposal Jacek laments the lack of ability of scripts to write to notecards, thus depriving them of a potentially-useful [...]

  • Prospero Linden Says:

    It’s not just the volume of notecard writes, it’s also a matter of garbage collection.

    Every time you edit a notecard by hand and save your version, if nobody else has a copy of the previous version, the previous version becomes “garbage”. That is, something on the asset server to which there is no reference. How do you figure out if there is no reference? That process is called garbage collection. Because the asset server is always filling up, we go through and collect the garbage every so often, to free up the space used by assets that aren’t referred to anywhere.

    The ability to create garbage notecards is right now limited by the speed of human thought and typing. Even with a delay, you can bet that some script writers would regularly be updating state in notecards (daily, hourly, whatever). Consider also that scripts tend to be in-world 24/7 (unlike, at least, most avatars), and that the number of scripts (even non-trivial scripts) in a given region usually dwarfs the number of simultaneous avatars that region can even support. This would lead to an explosion of garbage. Even if it didn’t overwhelm the asset server’s space requirements, it would overwhelm our garbage collection procedures.

    At which point you may wonder why we don’t keep a reference count with each asset, such as (say) languages like Java do with their objects. Well, yes, that’s an elegant solution, but bear in mind the scale of the thing. The asset server is many tens of terabytes (maybe over a hundred; I don’t remember the number off the top of my head). Tens of thousands of people are creating assets all the time, and need to be accessing those assets all the time. This leads to design issues that are entirely out of the realm of design issues that you think about for a programming language and memory management on a single system. Doing all of this is **hard**. If it weren’t, then there would be lots of companies, not just Linden Lab, that had a virtual world with the scripting and content-creation flexibility on the scale of Second Life.

    When you really start to think deeply about the implications of script-writeable notecards (and, believe me, when I was first in Second Life in my pre-Linden days, I was fully boggled that you couldn’t just do that), the issues become absolutely as thorny as the “company line” implies, despite the dismissal of those issues in missives such as your blog post here. The asset server is designed to be “WORM” (write once read many), because that’s the way the vast majority of assets work. (Consider textures; they are uploaded once, but downloaded a lot, as people come into a region and need the texture to see what it looks like.)

    At Linden Lab, we are well aware that scripters want/need some way of having persistent data associated with objects and scripts. It’s not clear that writeable notecards are the right way to do that. Most obvious things you can come up with will have non-obvious wrinkles and issues associated with them that are unique to the nature of having a single large global virtual world like Second Life. Currently, those working on scripting are focusing on making Mono work right; you can go into Aditi (the beta server) and check out regions that are Mono enabled. We don’t want to make big changes to the LSL functions and capabilities while trying to make sure that Second Life with Mono properly runs LSL; that moving target would make it much harder to get Mono stable. After that, obviously I can make no promises, but some sort of persistent store *is* something many of us would like to see. Will it happen in the next year? Who knows?

    -Prospero Linden (who came here looking for a way to do character animation in Blender, but found this….)

  • Nacon Says:

    Sounds like you have been waiting for Mono’s extended script memory (64 KB).
    ;)

  • Chavo Raven Says:

    This could all be resolved if they would just provide basic database access… and the asset server issue could be worked around by buffering reads and writes. Garbage collection shouldn’t really be an issue either if you’re buffering the data rather than doing byte-by-byte writes – and it would be even more efficient if they ever manage to release Mono. Of course, it would make the most sense to simply set up some secondary MySQL servers and add some basic functionality for that, which wouldn’t be that hard to include and has been asked for for years – just as object-to-object IMs have been asked for since at least 2004 that I know of (so we wouldn’t have to depend on the llEmail crap).
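
    (For what it’s worth, the usual workaround today is exactly that kind of off-world store: ship the data to an external web server with llHTTPRequest. A rough sketch, where the URL and the server-side script behind it are placeholders:)

        // Persist a value by handing it to an outside web server.
        // The URL is a placeholder for your own server-side script.
        string gURL = "http://example.com/store.php";
        key    gRequest;

        saveValue(string name, string value)
        {
            gRequest = llHTTPRequest(gURL,
                [HTTP_METHOD, "POST",
                 HTTP_MIMETYPE, "application/x-www-form-urlencoded"],
                "key=" + llEscapeURL(name) + "&value=" + llEscapeURL(value));
        }

        default
        {
            touch_start(integer num)
            {
                saveValue("last_touched", (string)llGetUnixTime());
            }

            http_response(key request_id, integer status, list metadata, string body)
            {
                if (request_id == gRequest)
                    llOwnerSay("Server replied (" + (string)status + "): " + body);
            }
        }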

  • Jeddin Laval Says:

    Forty years in this monkeybusiness field have forced some clues on me, among them this evident problem: Scaling up any system inevitably crosses successive thresholds of dynamics, and the system will just as inevitably break. As much as I hate to admit it, Prospero’s discussion explains it well, and although I’d love to script out piles of notecards to all and sundry, I can see down the rathole, and it’s full of rats.

    Second Life is in the business of scaling up, and being deeply cautious about enabling scripted creation of content isn’t just a good idea — it’s a good way to avoid suicide. Scale up very slowly. The next unanticipated dynamic threshold in this amazing world will find you soon enough.

    (8-]