
We're all remote

Mon Mar 21, 2011, 10:09 AM by kemayo
Many companies allow you to work from home, either full-time or a few days a week. Sometimes it's a privilege, granted to the top performers. Unfortunately this often leads to a situation where the "core team" is in a central office, and the remote developers are marginalized because they're not visible. In that case choosing to use your remote working benefits can be very bad for your career.

For us remote is just the way we work. The deviantART hq is in Los Angeles, and no developers live close enough to visit casually. I think the closest developer is in San Francisco. The nearest developer to me is about 250 miles away. Excepting the search team (who are based in Vancouver), to the best of my knowledge no two developers even live in the same city.

Now, this does come at some cost to us! We have to make an effort to keep our intra-team communication working, whereas a company with all their employees sitting next to each other gets a certain baseline for free.

How do we work?

We use Skype. A lot. The entire company communicates through a number of Skype chat rooms. Each team gets a chat, some topics get a chat, and all projects wind up with one. There's a trac install in use for issue tracking and wiki. Some teams prefer using Basecamp or other tools instead of trac, and they're free to do so.

As an earlier article mentioned, all our scattered developers get a virtual machine running a functioning copy of deviantART to develop on, so they're not tied to development servers in a central office.

Agile development

We have a modified agile system in use, which has evolved over time to meet our needs, and works reasonably well for us. We tried standard agile methodologies, and found they were a bit awkward with a remote team. Pair programming or a scrum, for instance, just aren't the same without colocation.

Our system revolves around weekly iterations; we're currently on our 174th since starting this system.

Each week on Monday our teams demo their progress to each other and to their internal customers on one big demo call that everyone attends. The demos are short-and-sweet; 3-6 minutes is standard. The general format is "our expectations for this week were to do X; we did that / didn't do that; here's supporting information".

After the big demo we break off into team-specific calls where the lead developers and their customers work out the next iteration's expectations. Some teams will decide to iterate at a different rate for one reason or another; it's not uncommon to discover that you need to make a quick prototype or view the analytics on a quick change to decide what to do next.

Throughout the iteration it's up to the developers on a given project to decide how they communicate. Some projects wind up doing daily check-in Skype calls, others work entirely in text chats. Since we have developers in widely disparate time zones it's again up to the developers on a project to work out amongst themselves times when they're all available to exchange information; there's no standard "office hours" required by the company. (Though people do gravitate towards Pacific time, it seems...)

Project teams

A normal project consists of a customer, a lead developer, and 0-2 other developers. If it involves changing the site's UI then a member of the UI team is assigned to work with the project, because UI is hard. Similarly if it involves changes to our infrastructure then a member of the Tech Ops team may be attached.

"Lead developer" is a term which can mean almost anything in this industry. To us it means the developer whose job is to coordinate the work of other developers on the team, and to work out with the customer what the project's expectations are. The lead is still an active coder on the project; we've yet to have a project with enough overhead that a pure manager is required.

The "customer" can be almost anyone. We pick someone in the company who we think can represent the needs of the project. So on a purely technical project it might be someone in dt, for a new feature it might be the product lead who championed the feature, for changes to our print store it'd be someone in retail, etc.

Team size is flexible, and depends on the project. Some teams do wind up with just a lead developer, if the expected workload is low. However, we prefer to put more than that on a team to make sure that several people will go over the code produced.

Reactor

We do have one long-running project team called Reactor. Its role is to fix bugs that aren't related to other active projects, and to implement features that aren't large enough to warrant spinning up a separate project. It's also where we assign all new hires initially, so its lead developer gets to mentor them and introduce them to our codebase. Reactor makes a good training ground since it's guaranteed to drag developers through disparate areas of deviantART.

Face time

Working remotely works well for us, but we like to supplement it with occasional face-to-face meetings to help make sure that everyone knows everyone else. When collaborating over Skype it's easy to just never interact with someone whose work isn't related to your area, after all.

Thus every year for the holidays deviantART flies everyone in to hq for our holiday party. We spend a week working a little and socializing a lot, to cement our sense of "team".

Particularly complicated projects may also involve getting everyone involved together in one place while the general shape of the project-to-come is hammered out. For instance, before the groups project launched in late 2008, the whole team spent a week in Canada debating what groups should be, and wrote the first prototype of the new profile page widget system while they were up there.

Better ways

A pseudo-regular activity in our off-topic developer chat is trying to find a better alternative to Skype for the text-chat portions of our communications. (Skype chat is great if you like the Skype client; if you don't then you're stuck with it anyway.) We've run through a number of options, but it's tough finding one which is:

  • non-technical enough for our non-developers
  • enough better than Skype to make it worth the hassle of switching everyone over


IRC is a common contender, but tends to fail on the latter point.

How have similar systems worked for you? Do you have a better pure-remote development methodology that we've not heard of?

I recently worked on a project that tried to improve the performance of deviantART muro.  As part of this project I did a lot of testing and benchmarking of various HTML5 operations.  I learned a lot about where one must be careful when writing web applications that use HTML5’s <canvas> element.  The following is a diary of sorts that I made while working on the project.  Of course your mileage may vary depending on the setup of your application, but as you will see, deviantART muro realized significant performance improvements when I applied the lessons I learned.

Conclusions

For those who don't want to read this whole long and rambling article, here are the main rules to live by as suggested by my testing:

:bulletblue: Reduce direct pixel manipulation as much as possible.  Use the line drawing API when possible, and when you must sample pixels get as few as possible.  

:bulletblue: Rendering shadows, especially ones with a high blur, will greatly reduce your performance.  You can draw quite a few shadowless lines in the time it takes to draw a single one with a soft shadow.

:bulletblue: Bundle as many line segments as possible into a single call to stroke().


The Setup

Since I was interested in a scenario similar to what deviantART muro will often see, I ran all tests on a canvas that was 1200px wide and 500px tall.  This would be the approximate size of the drawing area if a user with an average sized monitor maximized their browser window.  All tests were run on my laptop (2009 Macbook Pro with 3.06GHz Intel Core 2 Duo Processor, 8GB RAM, and Intel X25-M Harddrive) with minimal other apps running (Terminal, vi, and standard background tasks).  The browsers that I tested on were: Firefox 3.6.13, Safari 5.0.3 (6533.19.4), and Google Chrome 8.0.552.237.  Late in the game a colleague asked me how Firefox 4 beta compared, so I re-ran some of the tests using Firefox 4.0b10.  

Some people will surely ask how the Internet Explorer 9 release candidate stacks up against the rest of the browsers.  I apologize that I did not run the tests using IE9: I did not have a Windows machine handy, and did not feel it would be accurate to run these tests on a virtual machine.

For all the tests I would time how long it took to run a small section of code a bunch of times, and then subtract the time needed to run the same code without the critical bit I was testing.  This meant that the small costs of preparing each test would not contribute to the time measured by the test itself.  The results shown here are averages of running the tests several hundred times each.  Though I did not calculate standard deviations for each result, I kept an eye on the time distributions to make sure that they did not change too much from test to test.
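In outline, that subtract-the-baseline approach looks something like the following sketch (the function and variable names are illustrative, not the actual test harness):

function timeOperation(setup, operation, iterations) {
    var ctx = setup();
    var start = Date.now();
    for (var i = 0; i < iterations; i++) {
        operation(ctx);
    }
    var withOperation = Date.now() - start;

    // Run the same loop without the critical bit, so that setup and loop
    // overhead can be subtracted from the measurement.
    ctx = setup();
    start = Date.now();
    for (var j = 0; j < iterations; j++) {
        // intentionally empty
    }
    var overhead = Date.now() - start;

    return (withOperation - overhead) / iterations; // average cost per call
}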

These tests were meant to give me a ballpark idea of what is important and what is not; they were not intended to be official benchmarks or the basis of an academic paper analyzing the algorithms various browsers use, so please take the results with that grain of salt.  A browser's JavaScript performance depends on a wide variety of factors that were not part of these tests.  To judge a browser based on these results would be misguided.


Drawing Lines

The first thing I tested was basic line drawing: moveTo() a random location on the canvas, lineTo() a different random location.  I did the test once using a single fully opaque color, and again using various random colors and opacities.  This was meant to give me an idea of how much of a penalty one pays for making the canvas calculate more intricate blending.  I also did the same tests using quadraticCurveTo() and bezierCurveTo() so I could see how much more expensive it is to draw with smooth lines.  Of course, if you are using one of those functions in your app you will also have the overhead of having to calculate the proper control points to use.
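Using a harness like the one above, the straight-line and curve variants of the test can be sketched like this (canvas size from the setup section; iteration counts and helper names are illustrative):

function makeTestContext() {
    var canvas = document.createElement('canvas');
    canvas.width = 1200;
    canvas.height = 500;
    return canvas.getContext('2d');
}

function randomX() { return Math.random() * 1200; }
function randomY() { return Math.random() * 500; }

var lineCost = timeOperation(makeTestContext, function (ctx) {
    ctx.beginPath();
    ctx.moveTo(randomX(), randomY());
    ctx.lineTo(randomX(), randomY());
    ctx.stroke();
}, 5000);

var bezierCost = timeOperation(makeTestContext, function (ctx) {
    ctx.beginPath();
    ctx.moveTo(randomX(), randomY());
    ctx.bezierCurveTo(randomX(), randomY(),
                      randomX(), randomY(),
                      randomX(), randomY());
    ctx.stroke();
}, 5000);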



There is not much that is surprising here.  The four browsers that were tested performed pretty similarly.  Using more mathematically complex curves comes at a cost.

Next I wanted to see if it makes a difference how often one calls stroke() when drawing with lines.  I ran tests where I compared calling stroke after each line segment was drawn and where I drew a number of line segments and then stroked them all at once.  As can be seen by the approximately linear graph, stroke() takes close to the same amount of time each time it is called.  If you can draw 50 line segments before calling stroke(), you can save in the neighborhood of 20% of the cost of drawing.
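The two styles being compared look roughly like this (reusing the illustrative helpers from the sketch above):

function strokePerSegment(ctx) {
    for (var i = 0; i < 50; i++) {
        ctx.beginPath();
        ctx.moveTo(randomX(), randomY());
        ctx.lineTo(randomX(), randomY());
        ctx.stroke();              // paid 50 times
    }
}

function strokeOnceForBatch(ctx) {
    ctx.beginPath();
    for (var i = 0; i < 50; i++) {
        ctx.moveTo(randomX(), randomY());
        ctx.lineTo(randomX(), randomY());
    }
    ctx.stroke();                  // paid once for all 50 segments
}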




Shadows

Shadows are a really useful tool, not only for drawing actual shadows, but for anything that needs a nice soft edge.  However, that soft edge comes at a really high price.  For a while now, we at deviantART have known that WebKit browsers struggle when we use a lot of shadows.  I was really curious to get a handle on just what was going on that made Gecko and WebKit browsers behave so differently.  When one times how long it takes to draw straight lines with various amounts of shadowBlur, a really interesting graph appears.  WebKit browsers can draw small shadows quickly, but as shadowBlur increases their rendering time increases slightly worse than linearly.  Firefox, on the other hand, renders shadows at near constant speed.  If the shadows do not have much blur it is slower than the WebKit browsers, but when shadowBlur gets up to 100 it is four times faster.
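For reference, the state being varied in these shadow tests is just the context's shadow properties, along the lines of this sketch (values and helpers are illustrative, using the context helpers from earlier):

ctx.shadowColor = 'rgba(0, 0, 0, 0.5)';
ctx.shadowBlur = 100;     // the independent variable in these tests
ctx.shadowOffsetX = 0;
ctx.shadowOffsetY = 0;

ctx.beginPath();
ctx.moveTo(randomX(), randomY());
ctx.lineTo(randomX(), randomY());
ctx.stroke();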

Interestingly, Firefox 4 beta now has performance closer to that of the WebKit browsers.  The shadows of the same blur in Firefox 4 are also a lot wider than they were previously (Firefox 3 has always had smaller shadows than WebKit).  I do not know the details, but it would seem that the canvas spec must be settling on a softer, but more computationally intensive shadow as its reference.




Buffer Copying

Earlier profiling that I did showed that the worst performance issues in deviantART muro came from having to move buffers around at an inopportune time.  Any complex graphics app is going to have to store and/or move image data around, and I was really curious about what the best way to do this is.

There are a number of different ways to get at the data that is on a canvas.  One can use drawImage() to copy the contents of one canvas to another canvas.  You can get the contents of a canvas as a base64 encoded PNG by using toDataURL().  You can also get essentially an array of pixels using getImageData().  As can be seen below, the toDataURL() method is the clear loser; apparently the cost of encoding the data is pretty high.  Which of the other two methods to use is a little less clear until Firefox 4 usage is widespread.  As can be seen, Firefox 3 has some problems getting and putting image data quickly, but WebKit browsers are much faster at that than at using another canvas element as a buffer.
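Sketched out, the three approaches are (assuming canvas is an existing canvas element and ctx its 2D context):

// 1. Another canvas used as a buffer, copied with drawImage():
var buffer = document.createElement('canvas');
buffer.width = canvas.width;
buffer.height = canvas.height;
buffer.getContext('2d').drawImage(canvas, 0, 0);

// 2. A base64-encoded PNG (the clear loser in these tests):
var dataUrl = canvas.toDataURL('image/png');

// 3. Raw pixel data via getImageData()/putImageData():
var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
ctx.putImageData(pixels, 0, 0);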



An advantage of using getImageData() is that it can sample a portion of the canvas.  I did the same getImageData() test, but sampled squares of increasing size, and for all four browsers tested, getImageData() had close to constant speed per pixel sampled.  I had previously thought that the overhead of getting any pixels at all would be large, so I would sometimes sample more than I needed if I thought there was information that I would need at a later point.  As this graph shows though, it is better to grab only what you need, because you do not pay a noticeable penalty for sampling a second time down the road.
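In practice that means grabbing only the rectangle you actually touched, for example (the dirty-rectangle values here are made up):

// Cost scales with the number of pixels requested, so there is little reason
// to grab extra pixels "just in case".
var dirty = { x: 100, y: 80, width: 64, height: 64 };
var region = ctx.getImageData(dirty.x, dirty.y, dirty.width, dirty.height);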






Applying to dA muro

While all this data is somewhat interesting, one must wonder how much the knowledge helps in performance tuning a web application.  Of course everybody’s mileage will vary on this; I am sure that there are programmers out there who have a much better intuition for speed optimizations than I do, and they would have written faster code than I did right from the get-go.  However, I think that the code I started from was probably fairly typical of what one might expect from an experienced coder who was fairly new to HTML5 and used all of the available APIs in the interest of writing simple, straightforward code rather than optimizing prematurely.

The main lesson I learned is that any kind of getImageData() call or canvas copy should be avoided at all costs, delayed until a “down time” if it cannot be avoided, and if all else fails, handled with great care so that you sample only the pixels you absolutely need.  It is alright to call lineTo() many more times if it means you can avoid a call to getImageData().
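One way to apply the “down time” advice is to defer unavoidable copies until the user has finished interacting, for instance (an illustration of the principle, not deviantART muro's actual undo code):

var undoStack = [];
function onStrokeEnd(ctx, canvas) {
    // Snapshot the canvas for the undo stack just after the stroke ends,
    // rather than paying for the copy in the middle of drawing.
    setTimeout(function () {
        undoStack.push(ctx.getImageData(0, 0, canvas.width, canvas.height));
    }, 0);
}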

The first place that I tried to optimize was measured by a test that simulates a user making a bunch of short strokes relatively close to one another.  An artist would typically do this if they were cross-hatching, stippling, or applying a Van Gogh-esque texture to their drawing.  A lot of the changes that I made are particular to the internals of deviantART muro, and a description would not make sense to somebody unfamiliar with our codebase.  However, I will describe two of the optimizations.  When a user draws a line, the new line needs to be pushed into an undo buffer, and it also needs to be reflected in the zoom “navigator” panel that is in the corner of the screen.  These two tasks cannot avoid some kind of buffer copying, but smarter buffer copying led to some noticeable speed improvements, as can be seen below.  Note that Firefox 4 is not shown in the first three sets of results because I did not start testing it until later (and I did not feel like re-coding all my inefficiencies just to see how much it improved).



The next place I turned my attention was individual brushes that were taking longer than they needed to.  In most cases this came from unnecessary calls to clearRect() or fillRect() (these calls perform similarly to putImageData() calls).  The bulk of deviantART muro’s brushes are now quite a bit faster.  Once again, Firefox 4 is not in these results because I did not benchmark it at the beginning of the project.




Web Workers

Next I looked at the filters in deviantART muro.  To give context to why filters are slow, one must understand that an imageData object consists mostly of an array of pixel values with R, G, B, and A values stored separately.  Thus, the imageData array has width*height*4 elements.  Most filters need to look at the data surrounding a pixel in order to determine the new value for that pixel.  Let’s say that the filter looks in a radius of 3 pixels (so a square of 7 pixels to a side); that means it needs to know the value of 196 array entries in order to color a single pixel.  JavaScript array lookups are not particularly fast, so applying a filter to a large canvas can be painfully slow.
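A toy filter makes the access pattern concrete: every output pixel reads a (2*radius + 1)-sided square of neighbours, four array entries per neighbour (this is an illustration, not one of deviantART muro's actual filters):

function boxFilter(imageData, radius) {
    var src = imageData.data;
    var width = imageData.width;
    var height = imageData.height;
    var out = [];

    for (var y = 0; y < height; y++) {
        for (var x = 0; x < width; x++) {
            var sums = [0, 0, 0, 0];
            var count = 0;
            for (var dy = -radius; dy <= radius; dy++) {
                for (var dx = -radius; dx <= radius; dx++) {
                    // Clamp to the edges so we always read a valid pixel.
                    var nx = Math.min(width - 1, Math.max(0, x + dx));
                    var ny = Math.min(height - 1, Math.max(0, y + dy));
                    var offset = (ny * width + nx) * 4;
                    sums[0] += src[offset];
                    sums[1] += src[offset + 1];
                    sums[2] += src[offset + 2];
                    sums[3] += src[offset + 3];
                    count++;
                }
            }
            var o = (y * width + x) * 4;
            out[o] = sums[0] / count;
            out[o + 1] = sums[1] / count;
            out[o + 2] = sums[2] / count;
            out[o + 3] = sums[3] / count;
        }
    }
    // Write back; in WebKit we cannot replace imageData.data wholesale.
    for (var i = 0; i < out.length; i++) {
        src[i] = out[i];
    }
    return imageData;
}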

I figured that the good news about the filter problem was that it is something that can be easily parallelized. Most modern computers have at least 2 processor cores, so it is a shame to leave one of those idle while a single browser UI thread is churning away.  So, I prototyped a change that split the canvas into several chunks and passed the filtering off to some web worker threads.
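The prototype worked roughly along these lines ("filter-worker.js" and the message format are hypothetical stand-ins for the real code, and the worker is assumed to post back a plain array of filtered pixel values):

function filterInWorkers(ctx, width, height, numWorkers, done) {
    var rowsPerChunk = Math.ceil(height / numWorkers);
    var finished = 0;

    for (var i = 0; i < numWorkers; i++) {
        (function (startRow) {
            var chunkHeight = Math.min(rowsPerChunk, height - startRow);
            var chunk = ctx.getImageData(0, startRow, width, chunkHeight);

            // Copy into a plain array; the pixel array itself may not survive
            // postMessage()'s serialization in every browser.
            var pixels = [];
            for (var p = 0; p < chunk.data.length; p++) {
                pixels.push(chunk.data[p]);
            }

            var worker = new Worker('filter-worker.js');
            worker.onmessage = function (event) {
                // In WebKit we cannot assign an array over imageData.data, so
                // the result has to be copied back element by element.
                for (var k = 0; k < event.data.length; k++) {
                    chunk.data[k] = event.data[k];
                }
                ctx.putImageData(chunk, 0, startRow);
                if (++finished === numWorkers) {
                    done();
                }
            };
            worker.postMessage({ pixels: pixels, width: width, height: chunkHeight });
        })(i * rowsPerChunk);
    }
}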

The first problem I ran into is that web workers do not have a concept of shared memory.  Data passed to and from them must go through calls to postMessage().  An article on the Mozilla blog indicated that internally these messages are passed as JSON strings.  This is a problem for a task like filtering, which needs to operate quickly on a very large data set.  The cost of JSON encoding is not small compared to the cost of the actual computation.  Note also that in WebKit browsers you cannot assign an array reference into an imageData’s data, so you have to pay the penalty of doing a memory copy from the JSON-decoded array into an imageData object.

The results of my experiment were mixed.  Firefox 3 was quite slow before the change, and sped up by a factor of 3 when it was parallelized.  Safari, on the other hand, spent a long time churning before the threads even started executing (I cannot be sure, but I suspect that this was while the JSON encoding was happening), and then for some unknown reason the multiple threads each took a lot longer than the single UI thread.  Chrome’s threads ran very quickly at first, but then it sat for a little while before returning the data to the UI thread.

I spent a little bit of time trying to debug these issues, but eventually gave up.  From my experiences I would say that web workers are a cool technology that will be really useful someday, but at the moment some browsers are not quite ready for this particular use case.

Below you can see the CPU utilization of the two cores of my machine when the filtering code is running in a single thread vs when it is running in web worker threads.



This article assumes that you are familiar with the concepts of MVC programming in the context of web development. If you are not, Wikipedia has a decent overview in its entry about it.

The problem

An issue that MVC doesn’t address directly is how to efficiently group data access calls together across scopes. By “grouping data access calls”, I mean going from something like this:





To something like this:



When following MVC strictly, data access must happen in controllers. If two controllers need similar data from the same database table, in most MVC implementations the separation of code between controllers won’t even make it possible to group these similar data needs together.

Better MVC implementations manage to identify these common data needs across controllers and group the calls. For instance a meta-controller could iterate through all the controllers of a page and ask them what their data needs are. The meta-controller can then efficiently group the data access calls together, reducing the round trips to the data stores.
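To make that concrete, here is a minimal sketch of the meta-controller idea, with hypothetical names and shown in JavaScript rather than PHP for illustration:

function renderPage(controllers) {
    // 1. Ask every controller what data it needs.
    var allNeeds = [];
    controllers.forEach(function (controller) {
        allNeeds = allNeeds.concat(controller.dataNeeds());
    });

    // 2. One batched fetch instead of one round trip per controller.
    var results = batchFetch(allNeeds);   // hypothetical multi-get helper

    // 3. Let each controller render with the data it asked for.
    return controllers.map(function (controller) {
        return controller.render(results);
    }).join('');
}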

We already follow that strategy on modularized pages of deviantART, like user profiles, where each widget a user can install and configure (friends list, favourite deviations, featured artwork, etc.) has its own controller. The profile page itself has a meta-controller that coordinates data access for the controllers of all widget instances. If two widgets on a user profile need complementary data about the same artwork, the call will be combined into a single memcache get or database query.

That strategy is only possible because we know which widgets are on the page very early in the page generation, and consequently, which controllers will be involved. There are many situations where knowing that information in advance is impossible, or requires rewriting the code so much around this specific need that a massive layer of complexity is added for the sake of saving a few round trips to the database. It’s the age-old dilemma of maintainable easy-to-follow code versus less legible code optimized for data efficiency.

Our solution

Faced with this cross-controller data access grouping issue once again, we tried to push the boundaries imposed by MVC with a new approach. This time, instead of letting data access optimizations dictate the structure of our code, we decided that data access would get out of the way and be dealt with as late as possible in the page generation. The idea here is that sometimes you can’t tell what your data needs are until the logic in your controllers has run, or after you’ve already fetched some other data first.

The core of what allows us to perform that trick is something we call “datex”. Datex is a stream of intertwined DOM output and PHP objects. Instead of echoing, the views of our MVC code return datex. Whenever a piece of output is dependent on some data that can be grouped (for instance, basic user data), we “output” a PHP object in the datex stream. That object contains information about the data dependency and which partial view will render the DOM once the data needed is fetched. This way we make the data-dependent parts of our MVC code asynchronous.

As it’s being built, our page’s output looks like the following, a mix of DOM that could be rendered right away, and PHP objects that contain information about partial views that will be rendered later.



Then, once all the MVC code (in the traditional sense of the term) has run, we resolve this datex stream in passes. With each pass we can very easily group similar data access needs together.

Some objects are fully resolved on the first pass and return DOM, others return more datex with new objects in them. These are the situations that were very hard to optimize in more classic MVC implementations, when controllers that have received their initial data then require another round of data access.

After an initial pass, if we still have DOM and objects in the datex stream, we go through the same process again, and once more we can group similar data access together for efficiency. This process can go on for a few passes, until all PHP objects have eventually resolved to pure DOM. When the datex stream only contains DOM, we know that we can output the results, as they are the fully constructed web page.
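A toy version of that pass-based resolution loop, again in JavaScript with hypothetical helpers rather than our actual PHP library, might look like:

function resolveDatex(stream) {
    // Keep passing over the stream until only DOM strings remain.
    while (stream.some(function (item) { return typeof item !== 'string'; })) {
        var pending = stream.filter(function (item) { return typeof item !== 'string'; });

        // Group the data needs of every unresolved object in this pass into
        // one batched fetch (e.g. a memcache multi-get).
        var keys = pending.map(function (item) { return item.dataKey; });
        var fetched = batchFetch(keys);   // hypothetical batched data access

        // Each object resolves to more datex: DOM strings, and possibly new
        // objects that will be grouped on the next pass.
        var next = [];
        stream.forEach(function (item) {
            if (typeof item === 'string') {
                next.push(item);
            } else {
                next = next.concat(item.render(fetched[item.dataKey]));
            }
        });
        stream = next;
    }
    return stream.join('');   // pure DOM: the finished page
}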



[Figure: Going from datex to fully resolved DOM]


An example of a real-world usage is thumbnails on deviantART. Each thumbnail requires a memcache get. By grouping all thumbnails output on the page and treating them at once, we get all the data we need for rendering these thumbnails in a single memcache multi_get call.

Of course, that’s possible with template-based MVC, but it forces you to structure all your code around the fact that you need to know exactly which thumbnails will be on the page at the beginning of the page generation. With our system, you don’t need to know in advance where thumbnails will be needed; any code deep inside a view can request a thumbnail at any time, simply by adding a thumbnail object to the datex stream. The system guarantees that if the data access needed for that thumbnail can be grouped with a similar one originating from a completely separate view, far in scope, it will be.

Another advantage this technique has over templates/string replacement is that our PHP objects in datex can have declared dependencies between each other. We can also prioritize them regardless of their order in the DOM. If the ad banner at the top of the page depends on content at the very bottom of the page to be resolved into DOM first, this can be provided automatically, without having to structure the view code any differently than the logical output order of the page. It allows us to bring that logical top-to-bottom DOM order back into our views, where in the past we had the data logic take over and shuffle the order based on data optimizing needs and interdependencies. With datex you can read the views’ code and follow the order of the final output, even when the actual resolution of the objects might happen in a completely different order, optimized for data.

Additional benefits

So far we’ve only scratched the surface of what datex allows us to do. It’s a new way of thinking about building the page output. As such, it requires a new mindset to identify where and what it could be used for.

For a while we’ve bundled together AJAX requests that result from rapid user actions, in order to reduce the impact of network lag on these small requests. The backends of these independent AJAX calls can now seamlessly have their data access grouped, and this happens across completely separate scopes. Each AJAX request remains completely independent and isolated, but simply by virtue of happening near the same time, datex allows us to group similar data fetching calls that happen in them. In a similar fashion, one could design a very service-oriented architecture and have the services return datex.

Another very powerful aspect is the ability to prioritize resolutions of some objects over others in the datex stream. When working with asynchronous data stores, one can decide to resolve the objects depending on these asynchronous calls first, then spend time resolving other objects that depend on synchronous data stores; therefore making better use of time that might have been spent waiting for asynchronous data to come back.

Limitations?

One limitation people might see in this technique is that our output can’t be string-manipulated anymore as it’s being constructed. Since datex is a mix of strings and PHP objects, it’s impossible to do preg_replace() on it, for example. I believe that this limitation is in fact a great strength, because when developers start transforming DOM partials with string manipulations in PHP, they’re often trying to solve a frontend CSS/JS problem with backend PHP, or are just being lazy and prefer to do a preg_replace rather than adapt the existing code to support the new need. By forcing developers to avoid these patterns of string manipulation over partial views, we get cleaner code where content-dependent DOM transformation is forced to be in CSS/JS scope or to be rewritten to avoid any string replacement.

Implementation

Since we didn’t want to heavily patch PHP to achieve this, we handle datex with a homegrown PHP library. Most of the functions in our code that produce DOM output thus return datex instead of echoing. We merge the datex streams into each other as the page is constructed, resulting in a tree of datex. At the very end of page processing, we resolve all the objects found in the datex tree in passes until we’re left with pure DOM, and we output it.

It’s possible that this technique already exists in other languages or frameworks; we just couldn’t find anything similar when we came up with the idea. It would certainly be more efficient and lighter in syntax if this technique were supported at the language level.

If you know of techniques or technologies that elegantly tackle the same issue of grouping data access calls across scopes, please share; we’re always interested in comparing our experience with others. And if there isn’t anything like it, we hope that sharing this technique will be useful to others who face the same problems.

Edit: I've provided useful complementary information in response to questions on Hacker News about this article.

As some of you may be aware, there have been a number of steps taken lately to enhance security here at deviantART. Some of this has been prompted by the growth of deviantART as a company (the checks and measures that work for a small company start to be outgrown), and some has been in response to community concerns in recent weeks.

Some of the noticeable changes have included rolling out increased use of secure pages (SSL) across the site, the new grace-period on account deactivations, and as of last week, the requirement to confirm your password when changing your email.

What you won't have seen is the behind-the-scenes and lower-profile work that has also been happening, such as the strengthened password requirements that went live two weeks ago and ensuring that the tools that touch your data leave an adequate audit trail. (Exciting stuff!)

Security is an ongoing process, which means that there are more changes coming. In the coming days and weeks we will be making changes "under the hood" that will automatically log you out. This is a side effect of our transitioning to a technology with enhanced internal security: time moves ever on, and while the state-of-the-art from a few years ago may not be insecure today, it's still good to keep up with standards as they improve.

In the coming weeks we will also be soft-launching a reminder on your profile page if your password passed our old security requirements but fails the recently enhanced ones. You will not at this time be forced into changing your password, but we recommend that you take the reminder seriously if you see it, and choose a new password with careful thought and the help of a guide like this one.

What Can Be Done About 3 Gotchas in IE9 Beta


I just spent the past couple of days porting and debugging deviantART muro in Internet Explorer 9 beta.  Microsoft announced with much fanfare that they included support for <canvas> in IE9.  Unfortunately I took their word at face value and assumed that my existing HTML5 code would seamlessly start working once I changed X-UA-Compatible.  Alas, I instead stared in horror at my application that appeared to be possessed by some insane daemon.

I remember taking a C class when I was 14 years old, and the teacher went on about how great C was because it was portable.  I spent a week doing my first assignment on my PC in Turbo C++, and then showed up at the computer lab full of NeXT workstations the morning the assignment was due expecting it to just work.  One would think I would have learned from the ensuing traumatic experience, but here I am 20 years later still believing vendors when they say their implementations fit a standard.  In my defense, Chrome, Firefox, Opera, and Safari did an amazing job of coding to the HTML5 spec.  I don’t know why Microsoft couldn’t as well.

The following is a list of several IE9 gotchas that I ran into.  I am sure that there are more - this is only the result of kicking the tires.  It is also just the stuff that I ran into with deviantART muro; other applications will care more about other parts of the canvas feature set.


globalCompositeOperation

The Problem
IE ignores changes to context.globalCompositeOperation, always behaving like it is set to “source-over.”

Why This Matters
This is the biggest problem I have run into.  A canvas implementation without globalCompositeOperation is like having a salad bar with no lettuce.  There must be a million uses for globalCompositeOperation.  Set it to “destination-out” and you have an eraser.  You can use it to mask out shapes, or combine it with a pattern to create textured lines.  I would hope that Microsoft plans to implement it by the time they make a final release; to claim to have support for canvas without it would truly be an embarrassment.

Test Case

ctx.strokeStyle = 'rgba(255, 0, 0, 1)';
ctx.lineWidth = 10;
ctx.globalCompositeOperation = 'source-over';

ctx.beginPath();
    ctx.moveTo(0, 0);
    ctx.lineTo(100, 100);
ctx.stroke();

ctx.strokeStyle = 'rgba(0, 255, 0, 1)';
ctx.globalCompositeOperation = 'destination-out';

ctx.beginPath();
    ctx.moveTo(0, 100);
    ctx.lineTo(100, 0);
ctx.stroke();



Workaround
There is not a good workaround for this.




Canvas Resizing

The Problem
When a canvas is resized by changing the style.width or style.height, IE9 clears the canvas and resets the context.  Note that style.width is not the same as the width attribute of the canvas.  Having <canvas width="50" height="50" style="width: 100px; height: 100px"></canvas> would be equivalent to having a 50x50 pixel image that you stretch to 100x100px in a browser.  All browsers reset the canvas when you change the width or height attribute, but only IE resets the canvas when style.width or style.height is changed.

Why This Matters
Applications can zoom in and out of certain areas of a canvas by leaving the drawing as is and changing the style.width and/or style.height.

Test Case

// Start with a canvas that is 100x100px
ctx.strokeStyle = 'rgba(255, 0, 0, 1)';                                                                                        
ctx.lineWidth = 10;
                
ctx.beginPath();
    ctx.moveTo(0, 0);
    ctx.lineTo(100, 100);
ctx.stroke();
                
jQuery('#testCanv').width(101).height(101);
                    
ctx.strokeStyle = 'rgba(0, 255, 0, 1)';                                                                                        
ctx.beginPath();                                                                                                               
    ctx.moveTo(0, 100);                                                                                                        
    ctx.lineTo(100, 0);
ctx.stroke();



Workaround
Grab a copy of all the data in your canvas before you change its size, and paste it back after you are done resizing.  All context settings must also be saved and restored.  We would have to change our test case code to:

// ... snip
ctx.stroke();
           
var tmpData = ctx.getImageData(0, 0, 100, 100);     
jQuery('#testCanv').width(101).height(101);
ctx.putImageData(tmpData, 0, 0);
ctx.lineWidth = 10;
                    
ctx.strokeStyle = 'rgba(0, 255, 0, 1)';
// snip ...





Limited Shadow Offset

The Problem
IE9 places an arbitrary limit on how high you can set shadow offsets using shadowOffsetX and shadowOffsetY.  Brief testing shows that the limit seems to be dependent on various random factors.  I have not yet reverse engineered the algorithm for how the limit is determined, but so far I have usually seen it to be a couple thousand pixels.

Why This Matters
I am sure that many people reading this think that I am complaining about an inconsequential implementation detail, but it actually does matter.  For all of the great things that canvas has to offer, it is lacking in its ability to create soft lines.  Fortunately, it can do a lot of fancy stuff with shadows, so you can draw soft lines by drawing outside of the canvas’ viewport and casting a shadow over to where you need the soft lines.  If you plan to make complex drawings on a large canvas, and you do not want to worry about the fake lines that are casting shadows coming into view when you pan and zoom, it is helpful to be able to set the shadow offset to a really large number.

Test Case

ctx.lineWidth = 10;

ctx.shadowColor = 'rgba(255, 0, 0, 1)';
ctx.shadowBlur = 40;
ctx.shadowOffsetX = 10000;
ctx.shadowOffsetY = 0;

ctx.beginPath();
    ctx.moveTo(-10000, 0);
    ctx.lineTo(-9900, 100);
ctx.stroke();



Workaround
You can use smaller versions of shadowOffset (though until the algorithm for how the limit is determined is discovered, you will never know for certain if you are safe).  At times you might have to change the offset and split up your strokes to make sure that things that are supposed to remain offscreen stay offscreen.

Edit:
Please see this blog article for further discussion between the author and a Microsoft Technical Evangelist: blogs.msdn.com/b/giorgio/archi…
