As some of you may be aware, a number of steps have been taken lately to enhance security here at deviantART. Some of this has been prompted by the growth of deviantART as a company: the checks and measures that work for a small company start to be outgrown. Some has been in response to community concerns in recent weeks.

Some of the noticeable changes have included rolling out increased use of secure pages (SSL) across the site, the new grace-period on account deactivations, and as of last week, the requirement to confirm your password when changing your email.

What you won't have seen is the behind-the-scenes, lower-profile work that has also been happening, such as the strengthened password requirements that went live two weeks ago and the work to ensure that the tools that touch your data leave an adequate audit trail. (Exciting stuff!)

Security is an ongoing process, which means that there are more changes coming. In the coming days and weeks we will be making changes "under the hood" that will automatically log you out. This is a side effect of our transition to a technology with enhanced internal security: time moves ever on, and while the state of the art from a few years ago may not be insecure today, it is still good to keep up with standards as they improve.

In the coming weeks we will also be soft-launching a reminder on your profile page if your password met our old security requirements but does not meet the recently enhanced ones. You will not be forced to change your password at this time, but we recommend that you take the reminder seriously if you see it, and choose a new password with careful thought and the help of a guide like this one.

What Can Be Done About 3 Gotchas in IE9 Beta

I just spent the past couple of days porting and debugging DeviantArt muro in Internet Explorer 9 beta.  Microsoft announced with much fanfare that they included support for <canvas> in IE9.  Unfortunately I took their word at face value and assumed that my existing HTML5 code would seamlessly start working once I changed X-UA-Compatible.  Alas, I instead stared in horror at my application that appeared to be possessed by some insane daemon.

I remember taking a C class when I was 14 years old, and the teacher went on about how great C was because it was portable.  I spent a week doing my first assignment on my PC in Turbo C++, and then showed up at the computer lab full of NeXT workstations the morning the assignment was due expecting it to just work.  One would think I would have learned from the ensuing traumatic experience, but here I am 20 years later still believing vendors when they say their implementations fit a standard.  In my defense, Chrome, Firefox, Opera, and Safari did an amazing job of coding to the HTML5 spec.  I don’t know why Microsoft couldn’t as well.

The following is a list of several IE9 gotchas that I ran into. I am sure that there are more - this is only the result of kicking the tires. It is also just the stuff that I ran into with DeviantArt muro; other applications will care more about other parts of the canvas feature set.


globalCompositeOperation

The Problem
IE ignores changes to context.globalCompositeOperation, always behaving like it is set to “source-over.”

Why This Matters
This is the biggest problem I have run into. A canvas implementation without globalCompositeOperation is like a salad bar with no lettuce. There must be a million uses for globalCompositeOperation. Set it to “destination-out” and you have an eraser. You can use it to mask out shapes, or combine it with a pattern to create textured lines. I would hope that Microsoft plans to implement it by the time they make a final release; to claim to have support for canvas without it would truly be an embarrassment.

Test Case

ctx.strokeStyle = 'rgba(255, 0, 0, 1)';
ctx.lineWidth = 10;
ctx.globalCompositeOperation = 'source-over';
ctx.beginPath();
ctx.moveTo(0, 0);
ctx.lineTo(100, 100);
ctx.stroke();

// This second stroke should erase where it crosses the red line;
// IE9 beta draws a normal green line instead.
ctx.strokeStyle = 'rgba(0, 255, 0, 1)';
ctx.globalCompositeOperation = 'destination-out';
ctx.beginPath();
ctx.moveTo(0, 100);
ctx.lineTo(100, 0);
ctx.stroke();

There is not a good workaround for this.

Canvas Resizing

The Problem
When a canvas is resized by changing the style.width or style.height, IE9 clears the canvas and resets the context.  Note that style.width is not the same as the width attribute of the canvas.  Having <canvas width="50" height="50" style="width: 100px; height: 100px"></canvas> would be equivalent to having a 50x50 pixel image that you stretch to 100x100px in a browser.  All browsers reset the canvas when you change the width or height attribute, but only IE resets the canvas when style.width or style.height is changed.

Why This Matters
Applications can zoom in and out of certain areas of a canvas by leaving the drawing as is and changing the style.width and/or style.height.
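
For example, a 2x zoom can be as simple as scaling the element's CSS size. The element lookup below is hypothetical, but the idea is that the bitmap itself is never touched; other browsers keep the drawing, while IE9 beta wipes it at this point.

// Zoom a 100x100 drawing to 200% without redrawing anything.
var canvas = document.getElementById('drawing');   // hypothetical element id
canvas.style.width = '200px';    // IE9 beta clears the canvas and resets the
canvas.style.height = '200px';   // context here; other browsers do not.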

Test Case

// Start with a canvas that is 100x100px
ctx.strokeStyle = 'rgba(255, 0, 0, 1)';
ctx.lineWidth = 10;
ctx.beginPath();
ctx.moveTo(0, 0);
ctx.lineTo(100, 100);
ctx.stroke();

// Stretch it to 200x200px via CSS; IE9 clears the drawing and resets the context here
canvas.style.width = '200px';
canvas.style.height = '200px';
ctx.strokeStyle = 'rgba(0, 255, 0, 1)';
ctx.beginPath();
ctx.moveTo(0, 100);
ctx.lineTo(100, 0);
ctx.stroke();

Grab a copy of all the data in your canvas before you change its size, and paste it back after you are done resizing.  All context settings must also be saved and restored.  We would have to change our test case code to:

// ... snip
var tmpData = ctx.getImageData(0, 0, 100, 100);
// (resize the canvas here, as in the test case above)
ctx.putImageData(tmpData, 0, 0);
// re-apply every context setting that the resize reset
ctx.lineWidth = 10;
ctx.strokeStyle = 'rgba(0, 255, 0, 1)';
// snip ...

Limited Shadow Offset

The Problem
IE9 places an arbitrary limit on how high you can set shadow offsets using shadowOffsetX and shadowOffsetY.  Brief testing suggests that the limit depends on various factors.  I have not yet reverse engineered how it is determined, but so far it has usually been a couple thousand pixels.

Why This Matters
I am sure that many people reading this think that I am complaining about an inconsequential implementation detail, but it actually does matter.  For all of the great things that canvas has to offer, it lacks the ability to draw soft lines.  Fortunately, it can do a lot of fancy stuff with shadows, so you can get soft lines by drawing outside the canvas' viewport and casting a shadow over to where you need them.  If you plan to make complex drawings on a large canvas, and you do not want to worry about the fake shadow-casting lines coming into view when you pan and zoom, it is helpful to be able to set the shadow offset to a really large number.

Test Case

ctx.lineWidth = 10;
ctx.shadowColor = 'rgba(255, 0, 0, 1)';
ctx.shadowBlur = 40;
ctx.shadowOffsetX = 10000;
ctx.shadowOffsetY = 0;
ctx.beginPath();
ctx.moveTo(-10000, 0);
ctx.lineTo(-9900, 100);
ctx.stroke();

You can use smaller shadow offsets (though until the algorithm for the limit is figured out, you will never know for certain that you are safe).  At times you might have to change the offset and split up your strokes to make sure that things that are supposed to remain offscreen actually stay offscreen.
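
As an illustration of that kind of workaround (this is a sketch, not code from DeviantArt muro; the OFFSET value and the assumption that the canvas is narrower than it are mine), a helper can keep the real geometry off to the left and let only its shadow land in view:

// Draw a soft line by stroking off-canvas and letting the shadow fall in view.
// OFFSET must stay under whatever limit IE9 enforces, and larger than the
// canvas width so the real stroke never becomes visible.
var OFFSET = 2000;
function softLine(ctx, x1, y1, x2, y2, blur, color) {
    ctx.save();
    ctx.shadowColor = color;
    ctx.shadowBlur = blur;
    ctx.shadowOffsetX = OFFSET;      // shadow is cast back into the viewport
    ctx.shadowOffsetY = 0;
    ctx.beginPath();
    ctx.moveTo(x1 - OFFSET, y1);     // the real stroke stays off to the left...
    ctx.lineTo(x2 - OFFSET, y2);
    ctx.stroke();                    // ...only its shadow shows up at (x1, y1)-(x2, y2)
    ctx.restore();
}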

Please see this blog article for further discussion between the author and a Microsoft Technical Evangelist:…

A Circular Log Table in MySQL

We've found a simple method for creating a circular buffer using a normal MySQL table. This technique is obvious once you've seen it, and I'd be surprised if it hasn't been done before.

Why Would You Want a Circular Log?

Say you want to log messages, but you don't need to keep old ones. If you were logging to files, you could use a log rotation program. But what if you're logging to a database?

Couldn't you just regularly truncate the table? Well, that's what we tried at first. But when someone wanted to see a message from 22:00 the night before, and the truncation had run at midnight, they were out of luck. What we wanted was a way to keep at least 24 hours worth of entries at all times.

Features of the Circular Log
  • Each log entry requires only a single SQL statement.
  • The maximum number of rows in the table can be strictly controlled (and resized).
  • It's fast.
  • There's no maintenance required.

Rolling Your Own

First, create the log table.

CREATE TABLE circular_log_table (
    log_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    row_id INT UNSIGNED NOT NULL UNIQUE,
    timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    payload VARCHAR(255),
    INDEX (timestamp)
);

Next, decide on the number of rows you'd like to retain. We'll call that number MAX_CIRCULAR_LOG_ROWS.

Finally, to add new rows:

REPLACE INTO circular_log_table
    SET row_id = (SELECT COALESCE(MAX(log_id), 0) % MAX_CIRCULAR_LOG_ROWS + 1
                  FROM circular_log_table AS t),
        payload = 'I like turtles.';

That's it.

The payload column is here as an example. Any number of additional columns of any type should work, as long as they're all set in the REPLACE statement.

How Does it Work?

If you've used Linux, you're probably familiar with one circular log: the kernel's ring buffer, accessed and controlled through dmesg. The buffer has a fixed size. Once it fills up, it loops back on itself and starts overwriting old messages with new ones. That's essentially what happens with the MySQL log table as well.

Carrying the analogy dangerously far: the modulo of the log_id and the buffer size acts as a pointer to the address (row_id) in the table to write to.

Watching it In Action

Let's say that MAX_CIRCULAR_LOG_ROWS was set to 100. When there are no rows in the table, the subselect will give us 1 for the row_id (COALESCE(MAX(log_id), 0) % 100 + 1 = COALESCE(NULL, 0) % 100 + 1 = 0 % 100 + 1 = 1). This means that the first row inserted will have log_id = 1, row_id = 1. So far so good.

When it's time to insert the second row, MAX(log_id) will evaluate to 1 (since we haven't yet inserted the second row) and so the row_id will be 2, which again matches the log_id of the row upon insert (log_id = 2, row_id = 2).

This proceeds as expected up until 100 rows have been inserted into the table (log_id = 100, row_id = 100).

On insertion of the 101st row, row_id rotates back to 1 (COALESCE(MAX(log_id), 0) % 100 + 1 = 100 % 100 + 1 = 1). Now, when the row is inserted, the unique constraint on row_id means it will replace the row with row_id = 1, and the new row will have log_id = 101, row_id = 1.

The process continues to repeat itself now thanks to the modulo. At log_id 201 we'll be back to row_id 1, and again at 301, ad infinitum.

Resizing the Log Table

To grow the table, just increase MAX_CIRCULAR_LOG_ROWS. There will be a lag while row_id catches up to the old MAX_CIRCULAR_LOG_ROWS, and then the table will grow to the new limit.

To shrink the table, decrease MAX_CIRCULAR_LOG_ROWS and then DELETE all rows with row_id > MAX_CIRCULAR_LOG_ROWS, since rows above the new limit would otherwise never be overwritten. Again, there will be a lag until all entries are continuously ordered without gaps. And keep in mind that the DELETE could lock the table and take a while.

Is It Stable?

We've been using this technique for almost 2 years now on a 2,000,000-row table with a dozen columns and multiple composite indexes. The log_id is up to 615,069,600 at the time I write this. The table has accumulated some overhead, but the overhead is still a fraction of either the table's data or index size.

Eventually the log_id column will be exhausted, but even at 10,000 inserts per second it'll take 3.5 billion years.

DWait and Dependencies

Mon Nov 15, 2010, 5:46 AM by kemayo
It is a truth universally acknowledged, that a website in possession of much JavaScript, must be in want of a way to reduce HTTP connections.

The more files you include on a page, the longer it takes to download everything. Even when all you have is a lot of tiny files like JS, there's still a big limitation: browsers cap the number of HTTP connections they'll make to a single website at once. This limit is normally 2 connections per domain, so only two files can be downloaded at a time, and there's a certain amount of negotiation overhead when moving on to the next file.

Since page rendering is held up by all of the scripts and CSS in the head, that means you really want to have as few files as possible load in the head. Otherwise your viewers are left watching a blank page for precious fractions of a second while the 20 files in your head are downloaded two at a time.

deviantART has a lot of CSS and JavaScript. I counted right now (ack -G ".js$" -f | wc -l), and we have 560 JavaScript files and 310 CSS files. Not all of them are needed on every page, of course... but our core set of JS that gets loaded everywhere consists of 53 files, and the equivalent CSS is 34 files.

Back when we were a young site we just stuck all the JS into the head, because we didn't know better, and also because we didn't have much JS back then. But then we added more functionality, and we noticed just how slow it was making us. So we set out to develop a way to not suck.

Nowadays we use a system of automatically bundling up our JS and CSS into big files, so a single HTTP connection can fetch them all at once. So the 34 CSS files I mentioned become this big file. In the case of the JS it's a bit more complicated, and we also minify all of the files using the YUI compressor, resulting in something like this.

We define all of these bundles in "list files". These are simple text files listing the other files that should be combined. So v6core.js.list contains a bunch of file names, and it gets bundled together as v6core.js.

The bundling occurs in an svn commit hook. So whenever a developer makes a commit that touches a .css or .js file, it triggers a rebuild of the .list file that contains those files. The rebuild happens on our staging server, and the files get copied out to production when we do a release.


Now, because we know this .list system exists, we get to make lots of small JS files that contain single pieces of functionality. These components wind up having dependencies on each other... jQuery is used almost everywhere; lots of code creates modal windows; etc. So now we're faced with the problem of only including the .list files that contain the code we need for the current page.

We used to have to manually declare all of these dependencies in PHP when adding JS/CSS to a page, with a separate call for each .list file the page needed.


This is obviously somewhat unwieldy. It is also prone to us forgetting a dependency that works anyway because, for the moment, the missing file happens to be in a .list that is already included, and then breaking later when we rearrange the .list files so that less commonly used code is only loaded when needed.

So now what we do is have some special comments at the top of our JS/CSS files which look a little like this:

/* This is the hypothetical pages/awesome.js
@require jms/lib/difi.js
@require jms/lib/events.js
*/

Then in the PHP we just have to do:

$gWebPage->addModule("jms/pages/awesome.js", MODULE_FOOTER);

...and it'll take care of the rest without us having to think about it. It guarantees that the dependencies (and their dependencies) will be loaded before the requested file. The second argument ("MODULE_FOOTER") is a priority; the caller can say whether they need this JS to be output in the head, the top of the body, or the end of body. This makes sure that the only JS in the head is the JS that really needs to be there.

The dependency mapping is built in the same commit hook that I mentioned earlier, and is serialized out into a file that's loaded when we need to resolve dependencies.
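
As a rough picture of what that involves (the map format and file contents below are hypothetical, and the real resolution happens in PHP; JavaScript is used here only to sketch the algorithm), the map is essentially file-to-dependencies, and resolving it is a small depth-first walk:

// Hypothetical dependency map, keyed by file, listing its direct @require'd files.
var dependencyMap = {
    "jms/pages/awesome.js": ["jms/lib/difi.js", "jms/lib/events.js"],
    "jms/lib/difi.js": [],
    "jms/lib/events.js": []
};

// Return a file's dependencies (dependencies first), then the file itself,
// so everything a file needs is loaded before it.
function resolve(file, seen) {
    seen = seen || {};
    if (seen[file]) {
        return [];
    }
    seen[file] = true;
    var result = [];
    var deps = dependencyMap[file] || [];
    for (var i = 0; i < deps.length; i++) {
        result = result.concat(resolve(deps[i], seen));
    }
    result.push(file);
    return result;
}

// resolve("jms/pages/awesome.js")
// -> ["jms/lib/difi.js", "jms/lib/events.js", "jms/pages/awesome.js"]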

DWait dwhat?

When we were trying to remove as much JS as possible from the head, because it blocks rendering, we encountered the problem of JS in the head that controls behavior on the page. It obviously needs to be there as soon as possible, because otherwise a user who quickly clicks somewhere might see an error, or just have nothing happen. But in the vast majority of cases people won't click really quickly, and if we put it in the head we'll have delayed rendering for nothing.

Our solution to this problem is called DWait. It's a way for our JS to request that an action be delayed until a dependency has loaded. This lets us stick a lot of code in the very footer of the page, without worrying about whether some link in the page depends on it.

So you'll see a lot of code like this on dA:

<a onclick="return DWait.readyLink('jms/pages/gruzecontrol/gmframe_gruser.js', this, function () { GMI.query('GMFrame_Gruser', {match: {typeid: 62 }})[0].loadView('submit') } )" href="#" id="blog-submit-link" class="gmbutton2 gmbutton2plus">

This says that the click handler for the link depends on gmframe_gruser.js. If the file is already loaded in a .list then it'll execute the handler immediately. Otherwise it'll remember the click and run the handler as soon as the load has happened.

To detect the loading, every bundle file created by our commit hook gets a line of JS appended to the end which tells DWait that the individual files within it have been loaded.
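
To make the mechanism concrete, here is a stripped-down sketch of the pattern in plain JavaScript. It is an illustration only, not deviantART's actual DWait code, and the loaded() name for the appended notification call is an assumption.

// Sketch of a DWait-style registry: remember which files have arrived and
// queue up any actions that depend on files that haven't.
var DWaitSketch = (function () {
    var loadedFiles = {};   // file name -> true once its bundle has arrived
    var pending = {};       // file name -> callbacks waiting on that file

    return {
        // The line appended to each bundle would call something like this,
        // naming every individual file the bundle contains.
        loaded: function () {
            for (var i = 0; i < arguments.length; i++) {
                var file = arguments[i];
                loadedFiles[file] = true;
                var queue = pending[file] || [];
                while (queue.length) {
                    queue.shift()();
                }
            }
        },
        // Run fn now if file is already here, otherwise remember it for later.
        ready: function (file, fn) {
            if (loadedFiles[file]) {
                fn();
            } else {
                (pending[file] = pending[file] || []).push(fn);
            }
        },
        // For onclick handlers: queue the action and cancel the default link behavior.
        readyLink: function (file, el, fn) {
            this.ready(file, fn);
            return false;
        }
    };
}());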

There are also a few JS files that have a special command in their header called "@@fastcall". This means that the file is so important to the page that it has to be output directly in the head of the page as an inline script. We cache a minified version of the JS in the dependency map so that this case doesn't involve extra file reads on the webservers.

There's one more trick that DWait has for cutting down load time, and it goes back to the priority argument to addModule that I mentioned earlier. We can tell it to use the priority "MODULE_DOWNLOAD" which means that dependency information is passed to DWait, but that the JS file itself isn't loaded. Instead it waits until a DWait.ready call asks for it and then dynamically loads the file. This is fantastic for rarely used functionality, with the tradeoff being a slight delay when the user first uses it.
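
In the same hypothetical terms, the MODULE_DOWNLOAD case boils down to queuing the action and then pulling the bundle in on demand. Again, this is a sketch of the idea rather than the real implementation, and the URL and file names are placeholders.

// Fetch a bundle the first time something actually needs it. The bundle's
// appended loaded(...) call fires the queued action once the script arrives.
function loadOnDemand(bundleUrl, file, fn) {
    DWaitSketch.ready(file, fn);                 // remember what to run
    var script = document.createElement('script');
    script.src = bundleUrl;                      // a real version would avoid duplicate loads
    document.body.appendChild(script);
}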


These techniques are important for any website, no matter how small. Page load speed has a major effect on how people perceive your site, and there's a lot you can do to improve it. As a first step, just get as much as possible bundled up and out of the head of your page, and see how much effect it has.

Devastating scrabble play

Wed Nov 3, 2010, 1:27 AM by randomduck
Author: chris
Date: 2010-11-03 00:20:40 -0700 (Wed, 03 Nov 2010)
New Revision: 129415
Log: I am a bit drunk.

[11/3/10 12:42:05 AM] randomduck: bolt, did you declare all those  $secdb you making inserts on?
[11/3/10 12:43:28 AM] chris: thank you
[11/3/10 12:45:10 AM] Pachunka: it's my fault
[11/3/10 12:45:16 AM] Pachunka: my last scrabble play really shook him
[11/3/10 12:45:21 AM] Pachunka: it was devastating

Author: chris
Date: 2010-11-03 00:46:38 -0700 (Wed, 03 Nov 2010)
New Revision: 129417
Log: I am sorry.
