
Upgrading to jQuery 1.9

Sat May 4, 2013, 11:20 PM

One of the most time-consuming tasks involved in running a large website is upgrading libraries. Of course we do this all the time, but some of those libraries are used across the entire site. One such library at deviantART is jQuery. Unfortunately, we fell a bit behind on keeping jQuery up to date and were still on version 1.7. Over the last couple of months, though, dt has upgraded the whole deviantART network to jQuery 1.9. As you may already know, this upgrade can be slightly challenging due to the number of changes that break backwards compatibility.


The first thing that anyone upgrading from an earlier version should reach for is the jQuery Migrate plugin. The plugin is loaded alongside jQuery 1.9 and generates console warnings when deprecated features are used. Without a doubt, this is the most helpful tool available for finding the parts of your site that could break. To use it, first include jQuery, then include the Migrate plugin:


<script src="/styles/js/jquery.js"></script>
<script src="/styles/js/jquery.migrate.js"></script>

Also make sure that you enable warnings and traces, so that you can gather as much debugging information as possible. To do so, add this immediately after migrate.js:


jQuery.migrateMute = false;
jQuery.migrateTrace = true;

One of the most significant changes in jQuery 1.9 is the removal of .live() and .die(), which have been superseded by .on() and .off(). Last summer, during our yearly summit, we removed almost all of these calls from our JS, and the last few live/die calls have now been removed completely.
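For anyone still making this change, the conversion is mechanical. Here is a minimal sketch (the selector and handler below are illustrative, not taken from our codebase):


function onFavClick(e) {
   e.preventDefault();
   // ... handle the click ...
}

// Before (removed in 1.9): bound to every current and future .fav-button
// $('.fav-button').live('click', onFavClick);
// $('.fav-button').die('click', onFavClick);

// After: delegate from a stable ancestor; document always works, but a closer container is cheaper
$(document).on('click', '.fav-button', onFavClick);
$(document).off('click', '.fav-button', onFavClick);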


Another significant issue that had to be resolved was the removal of .data('events'). Although very little of our JavaScript actually used this feature, not addressing it would have caused some serious problems. Resolving it was pretty straightforward and just required maintaining our own cache of attached events where necessary.
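We won't show our actual cache here, but the general shape is something like this hypothetical helper, which remembers handlers in a .data() key as they are bound:


// Hypothetical helper: bind a handler and record it ourselves,
// since .data('events') no longer exposes jQuery's internal registry.
function trackedOn($el, type, handler) {
   var cache = $el.data('trackedEvents') || {};
   (cache[type] = cache[type] || []).push(handler);
   $el.data('trackedEvents', cache);
   $el.on(type, handler);
}

// Later, instead of $el.data('events'), consult the cache we built:
// $el.data('trackedEvents');  // e.g. { click: [fn, ...] }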


There were a couple of other small issues, but one of them was extremely important and isn't explained terribly well in the migration guide: the separation of .prop() from .attr(). Simply put, .attr() is for changing the HTML markup, while .prop() is for changing the properties of the underlying DOM element. Without jQuery, you would use element.setAttribute() to change the attributes of an element; properties would be changed by assigning a value to them directly: element.checked, element.className, and so on.


Sometimes the HTML markup and the element properties mirror each other, and sometimes they don't. Here are some examples:


An input element without a type attribute is actually a "text" input:


var $node = $('<input>');
$node.attr('type'); // undefined
$node.prop('type'); // "text"

The "checked" attribute of an checkbox only reflects the default state, not the active state:


var node = $('<input type="checkbox" checked="checked">').get(0);
node.checked = false;
node.getAttribute('checked'); // "checked", even though the checkbox is now unchecked

The "value" attribute of an input only reflects the default state, not active state:


var node = $('<input type="text" value="foo">').get(0);
// assuming the value changed to "rubber duck" (by typing into the input) before code is run...
node.value; // "rubber duck"
node.outerHTML; // <input type="text" value="foo">
$(node).parent().html(); // same as above, even with jQuery
node.getAttribute('value'); // "foo"

Note that jQuery does take the values of some properties into account when using .html().


The "href" attribute of a relative link does not represent the full link with protocol and domain:


var $node = $('<a href="/test">Test</a>');
$node.attr('href'); // "/test"
$node.prop('href'); // "http://www.deviantart.com/test"

The "selectedIndex" property of a dropdown tells you the currently selected option index:


var $node = $('<select><option>0</option><option>1</option><option selected>2</option></select>');
$node.attr('selectedIndex'); // undefined
$node.prop('selectedIndex'); // 2

As you can see, there is a very real difference between .attr() and .prop(). For a long time, jQuery munged the return values of .attr() to reflect property values, and inevitably this led to confusion or outright broken code. Although this split is probably one of the most difficult changes to get right, we fully support the decision that jQuery has made.


So, in conclusion, what is the best way to use .attr() and .prop()? This is our recommendation:


// Bad
if (cond) {
   $(node).attr('checked', true);
} else {
   $(node).removeAttr('checked');
}
// Better
$(node).prop('checked', cond);
// Best
node.checked = cond;

dt hopes that this post will help you with your upgrade to jQuery 1.9 and beyond!



Embedding deviantART muro

Tue Feb 12, 2013, 12:00 PM

A new way to use deviantART muro


The deviantART muro team have been beavering away for a little while on a new feature for third-party developers, and today we're pleased to reveal that deviantART muro now has a third-party embedding API.


What does this mean?


In simple terms it means that with a small amount of JavaScript code, you can bring the power of deviantART muro to your website, allowing your visitors to draw or edit images anywhere you like.


To make life easier for you there's a jQuery binding for the API, or a raw HTML/JavaScript example if you prefer to work more directly or use a different framework. There's also a WordPress plugin that can serve as an example of what's possible, or that you can use directly on your blog if it runs on WordPress.


The jQuery plugin



Using the jQuery plugin is the easiest way to embed deviantART muro in your site. This quick snippet of JavaScript shows how little is involved in hooking up deviantART muro to edit an image and receive the edited image back:


// Embed deviantART muro within the element with id "damuro-goes-here".
$('#damuro-goes-here').damuro({
   // Say what image we want the user to edit.
   background: '../images/crane_squared_by_mudimba_and_draweverywhere.png',
   // We don't want to have deviantART muro load automatically.
   autoload: false
   })
   // Bind a single-use onclick handler to open muro when they click on the splash screen
   .one('click', function () { $(this).damuro().open(); })
   // Chain down to the damuro object rather than $('#damuro-goes-here')
   .damuro()
   // The .damuro() object is a promise, so let's bind done() and fail() handlers.
   .done(function (data) {
            // Here's where you'd save the image; we'll just set the page background as an example.
           if (data.image && !/\'/.test(data.image)) {
               $('body').css('backgroundImage', "url('" + data.image + "')");
           }
           $(this).hide().damuro().remove();
       })
   .fail(function (data) {
           $(this).hide().damuro().remove();
           if (data.error) {
               // Something failed in saving the image.
               $('body').append('<p>Sorry, something broke: ' + data.error + '.</p>');
           } else {
               // The user pressed "done" without making any changes.
               $('body').append("<p>Be that way then, don't edit anything.</p>");
           }
       });

If you want to see an example like this in action, take a look at the examples page.


The jQuery plugin also provides a convenient interface to the command API, allowing you to send commands to an embedded deviantART muro. The range of commands is currently fairly limited, but it does let you apply filters or import new images into layers:


$('#damuro-goes-here').damuro().command('filter', {
   filter: 'Sobel',
   layer:  'Background'
   })
   .promise()
   .done(function (data) {
       alert("The filter was applied.");
   })
   .fail(function (data) {
       alert("There was an error applying the filter: " + data.error);
   });

Raw HTML/JavaScript examples



The jQuery plugin is the recommended way to use the deviantART muro API, but if you don't or can't use jQuery, you'll probably find the raw HTML/JavaScript reference implementation useful. It has no external dependencies and can either be copied directly or used as the basis for writing a plugin for the JavaScript framework of your choice.


The interface is less friendly: you'll need to send messages via postMessage() directly, implement your own secure event listener to receive the messages back, and take care of setup and teardown yourself.
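As a rough idea of what that involves, here is a minimal sketch of the pattern. The origin, element id, and message format below are placeholders rather than the actual deviantART muro protocol, so check the reference implementation for the real details:


// Receive messages from the embedded iframe, verifying the origin
// before trusting anything in the payload.
window.addEventListener('message', function (event) {
   if (event.origin !== 'https://sta.sh') {   // placeholder origin
      return;
   }
   var data = JSON.parse(event.data);          // placeholder message format
   // ... handle the returned image or error here ...
}, false);

// Send a command to the iframe, again naming the origin we expect it to have.
var frame = document.getElementById('damuro-frame');   // placeholder id
frame.contentWindow.postMessage(JSON.stringify({ cmd: 'open' }), 'https://sta.sh');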


If you do use this example code to write your own JavaScript framework plugin, please let us know so we can give you a shout out and link to you.


deviantART muro WordPress plugin



If you're stumped for ideas or just want to add deviantART muro to your WordPress blog then the deviantART muro WordPress plugin is for you.


It hooks into WordPress in three places:


  • The Media Library - Now you can draw items directly into your WordPress Media Library.
  • Comments - You can enable deviantART muro in comments, allowing your site visitors to post images with their comments. You can configure moderation independently of text-only comments if you're not too keen on trusting the internet-at-large with uploading images to your blog.
  • Article shortcodes - You can embed an instance of deviantART muro within any article using a [damuro background='filename.jpg'] shortcode. This allows visitors to draw on an image of your choosing and post the result as a comment. How you use this is up to you, but you could use it to ask for critiques on your work or just to run competitions with a starting background.

Licensing and where to get it


All the code in these plugins is open source under a standard BSD 3-Clause license; the image assets are under a Creative Commons Attribution 3.0 License.


You can find the latest version of all this code in our GitHub repository, or if you prefer you can fetch the latest stable releases from the jQuery plugins site and the WordPress plugins site.


Please note that the open-source license only applies to the plugin code provided on GitHub, not to the core deviantART muro code running on Sta.sh, which remains copyrighted and the property of deviantART.




#DT and LOGR

Mon Nov 5, 2012, 11:42 AM by fartprincess
It's Friday evening and after a long day, you check the code you were working on into git, have the commit reviewed, accepted, merged, and sync it live. All seems right with the world. You let out a sigh of relief, back your chair away from your desk, and walk away in a satisfied mist of ease. In fact, you're excited because you're going to a concert with your friends tonight.

But then, twenty minutes after you leave, it begins. Errors. Fatal errors. And you're not around to know. So what happens?

In dt, we look out for one another. One of the ways we do this is through an error logging service we've built called LOGR. If you read our article on DRE, you'll remember that we mentioned a variety of functions we use to generate messages within DRE. One of these is called dre_alert().

For example, you set up an alert in your code that gets triggered when a draft in Sta.sh Writer can't be loaded:

dre_alert("Failed to load draft");


And it seems to be happening every 5 seconds. Oops. It turns out the error is caused by a simple typo. Fortunately, everyone on our team gets e-mailed when these severe types of errors are logged, and since we're situated all over the world, someone is almost always around to take responsibility for problems and emergencies. So yury-n kindly steps in, fixes the problem, and everyone is happy again, yay.

How does LOGR work?


We store a message and a "payload" (any additional debugging information you pass along), together with host and backtrace information, to help figure out the chain of events that led to the alert. All of this information gets sent to a Redis server. We rotate the data using a round-robin scheme similar to RRDtool's, which keeps memory usage constant even though we store millions of different messages within LOGR.
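LOGR itself lives in our server-side code, but the capped-storage idea is easy to sketch. Here's a hypothetical Node.js illustration (not LOGR's actual implementation, and the key names are made up) that pushes each alert onto a Redis list and trims it so old entries roll off:


// Hypothetical sketch only: each alert key is a capped Redis list.
var os = require('os');
var redis = require('redis');            // node-redis v4+

var MAX_ENTRIES = 10000;                 // illustrative cap per alert key

async function logAlert(client, alertKey, message, payload) {
   var entry = JSON.stringify({
      message: message,
      payload: payload,                  // any extra debugging info
      host: os.hostname(),
      time: Date.now()
   });
   await client.lPush(alertKey, entry);               // newest entry first
   await client.lTrim(alertKey, 0, MAX_ENTRIES - 1);  // drop everything past the cap
}

async function main() {
   var client = redis.createClient();
   await client.connect();
   await logAlert(client, 'logr:failed_to_load_draft', 'Failed to load draft', { draftId: 42 });
   await client.quit();
}

main();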

Is it just for emergencies?


Nope. We use it for several things, often recognized problems that aren't emergencies. Every time the alert fires, the incident gets grouped with other alerts of the same type, so we can get a better picture of how frequently it's occurring, spot any patterns, and use the available payloads to work out why it's happening.

We can tag our alerts however we want. So say we have 5 alerts set up for deviantART muro: we can tag each one accordingly and then search for that tag later to find all the alerts related to that part of the site. Cool, right?

We can also set thresholds on alerts. Once a threshold is established, if the incident occurs more than the allowed number of times, an e-mail is sent to a particular developer.
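For illustration, a threshold check can be as simple as counting occurrences in an hourly bucket. This hypothetical sketch (not our actual code; the client and key names are made up) notifies once the limit is crossed:


// Hypothetical sketch: count occurrences per hour and notify when a threshold is exceeded.
async function recordOccurrence(client, alertKey, threshold, notify) {
   var hourBucket = Math.floor(Date.now() / 3600000);
   var counterKey = alertKey + ':count:' + hourBucket;
   var count = await client.incr(counterKey);
   if (count === 1) {
      await client.expire(counterKey, 2 * 3600);   // let old hourly buckets expire on their own
   }
   if (count === threshold + 1) {
      // Only fire once per bucket, right as the threshold is crossed.
      notify(alertKey + ' exceeded ' + threshold + ' occurrences this hour');
   }
}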

For example, in this alert, $allixsenos is set up to receive an e-mail any time it happens more than 40 times in one hour:



Mousing over the chart, we can see a bit more information, like the exact number of occurrences that happened at different points in time.

And, as mentioned, we also get a nice payload of information:



With LOGR, we're able to get an idea of which of these alerts are exceeding their thresholds, which are happening frequently, which alerts are most recent, and when they were first seen. This helps us easily rule out which events were transitory or already resolved and which are continuing to happen and need fast attention.

We also generate histograms so that we can see a breakdown of how an error is occurring. A histogram is available for any piece of data stored, so you can see the different variations that occur. For example, we could see how the error is spread out across different servers:



But again, this isn't limited to just servers. It's available for any data stored in the payload.

We're always looking for ways to improve LOGR so that we can spot problems with the site in a timely fashion and minimize the impact on deviants, letting you continue to enjoy the site as usual :)


How We Debug in DT

Thu Sep 13, 2012, 5:31 PM by fartprincess
As developers, we often have to debug complex data across several different layers: server-side PHP, JavaScript, load time, server information, memcache events, how pages are routed, along with several other pieces of data.

In dt, we use an internal system called DRE (Developer Runtime Environment), which allows us to grab information about how the code we write is executed while making a minimal impact on the code itself.

What makes DRE especially awesome?


We can use it to help debug and solve problems that other users are having. Once we are aware that a particular user is having an issue, we can add DRE logging for that user, usually limiting the logging to the part of the site where they are having the problem. We can then use the log to check for any errors that show up.

For example, say a user named testera is having trouble with a journal. We can log debug data for him when he visits journal pages. That log stores several pieces of information, many of which give us a better idea of where the point of failure is. We might want to check what privileges the user has, or what roles they have in a group if that's where the journal is being posted.






We take this type of logging very seriously, so don't be :paranoid:; we only use it as a last resort, and typically only within our own personal development spaces with test data.

We also store the logs so we're able to link them to other developers (and even then, we only store those logs for a few hours).

Why is it helpful to developers?


All debug information loads in a single window, accessible either by clicking a box at the top of any page or by pressing F2. DRE can log simple data types, but it can also provide a highly readable breakdown of complex objects using nested tables.



We also use it to set up alerts. If a high-risk condition occurs on a particular page, the box used to display DRE will turn red and pulsate to make it more apparent to us that there is a problem. This could be something as simple as a failure to retrieve data or something as severe as a failure to include a file (or worse, a fatal error).

More importantly, it allows us to trace data and debug across multiple layers, including non-web-facing processes and asynchronous tasks, where the data may not be immediately available at runtime.

It's not just for fixing bugs; it also helps us spot slow-loading modules and queries and see where we can better optimize the number of queries we need to run on any given page.

For example, on this page, we have 12 separate queries that run:



This is pretty good, given all the data that has to be loaded to make deviantART run.

But on a different page, we might have 52 queries, some of which are very similar and load data from the same tables. If we can find a way to combine them, we can reduce the number of queries, which will help the site perform better. We have a way of handling this in dt, a technique called wenching (combining several queries into a single query; we'll get to this in a future article), but we might not be aware that something needs to be wenched until we get a composite look at everything taking place on the page. This is where DRE comes into play.

We can also use it for profiling, to see when a particular file is taking a long time to load. This might be because the code in that file has inefficiencies that need to be refactored. For example, here's a page just to show you how this might look:



Eek. Bad, right? Fortunately, this isn't something we see too much of. And when we do, we try to remedy it.

So how does it work?


This is a silly, impractical example, but it'll do.

$orders = $this->get_orders();
dre_trace('orders', $orders);


If get_orders() returns an object of orders, DRE will trace that object into a table, labeled "orders". Simple enough.

"This sounds like it could be kind of messy when you have a bunch of developers all using it! If everyone is using DRE, the panel would be completely overwhelmed with junk I don't care about!"

Yep. That's why we use dre_trace() sparingly. We have several functions for DRE, one of which takes a parameter to filter the message to only those who want to see it. For example, if I wanted to make sure only I saw $orders from the example above in our DRE panel:

$orders = $this->get_orders();
dre_cond_trace('fartprincess', 'orders', $orders);


At the end of the day, this utility saves developers a massive amount of time.


Last year, we released the deviantART and Sta.sh APIs, our first official support for third-party app integrations. Eager developers have asked for many new APIs since then, anxious to step away from page-scraping and toward officially supported integration. We're eager for that too! :eager: 

Today we are happy to announce three new Sta.sh APIs which give developers greater access to their users' Sta.shes!

---

The New Sta.sh APIs

  • Fetch Submission Media - Now developers can request the filesize, dimensions and URL for the original media associated with any Sta.sh submission.

  • Fetch Submission or Folder Metadata - Apps can now get all of a submission's dirty details (keywords, artist comments, thumbnail URLs, etc.). It can also be used with Sta.sh folders for greater organization capabilities!

  • List Folders and Submissions - My personal favorite and the workhorse of this release: this API call gives developers full read access to a user's Sta.sh in an intelligent way. We'll look more at the tech behind this call below.
--- 

The Challenges of a Delta

List Folders and Submissions (aka /delta), the most powerful call in this release, uses an incremental approach to allow apps to retrieve and store the current state of a user's Sta.sh in the most efficient way possible. By using the current state, developers and apps can recreate the Sta.sh experience on any device with any interface they desire. Let's look at what it does, how it works, and some of the challenges that dT faced during development.

Users can have up to 10GB of deviation submissions in their Sta.sh - literally thousands of 1MB submissions. Each submission belongs to a folder, and folders can have many submissions. For an application to represent this to the user, it needs to know everything about the submissions as well as the containing folders. And because submissions can be updated, published, deleted, and moved between folders, an application needs to check for changes constantly.

Challenge 1: How can a third party application check the status of thousands of deviation submissions and Sta.sh folders repeatedly without drastically increasing server load and lagging the app?

Challenge.. Accepted: The solution was inspired by Dropbox's /delta API (though its fundamentals can be seen throughout the world of computer science). The first time an application loads a user's Sta.sh state, we give it everything: all of the folders, all of the submissions. That kind of transmission could be megabytes, so we chunk it up into result sets of roughly 120 submissions. The application makes as many calls as necessary to retrieve the full list and stores all of this information locally; when it completes, it will have the exact state of that user's Sta.sh.

We also give the app a unique cursor which represents that exact state of that user's Sta.sh. After the user interacts with Sta.sh (submitting new deviations, modifying existing ones, publishing and deleting, moving deviations between folders), the application can send us the cursor and we'll give it everything that has changed. Checking for updates becomes a very fast, repeatable process for the application.

Challenge 2: How do we build the list of modified submissions and folders?

Challenge.. Accepted: Let's reconsider the data challenge above: 10GB of deviations = thousands of submissions = tons of data. Per user. Storing the exact state of a deviation at each step in the process would.. well, it'd be like striking the immovable object with the unstoppable force. Chuck Norris would disappear from existence. :confused:

Aiming to save ol' Chuck, we implemented a circular log table to track each user action in Sta.sh; we can store as many as one hundred and sixty thousand user actions. Anything above that wraps the table and automatically deletes old entries.

Each cursor corresponds to a specific log entry, so we can quickly compile a list of modified items since that cursor. Note that we do not record the state of the item in the log - we just record the fact that it changed in some way. This lets us fetch data for a small list on request, making the delta call cheap and fast.

Challenge 3: What happens when the circular table rolls over and old entries are lost? What if my application's last cursor becomes invalid? What if a submission is added and deleted between delta calls?

Challenge.. Accepted: The first two questions are examples of edge conditions where an incremental approach breaks down. To compensate, we have a special flag called reset that tells the application it needs to throw away its stored data and start over.

Because we only log the change in items, the list of those changes is likely to contain useless transactions, like a submission that was added and deleted between delta calls. Applications don't care about this kind of entry - it won't appear in the app anywhere, doesn't exist anymore, and is simply confusing to see in the delta list. We applied an extra layer of filtering on the changelog to ensure that the application receives relevant, useful updates.
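Putting the pieces together, the client side of this is a simple loop. Here's a rough sketch of how an app might consume an incremental delta endpoint; the URL, parameter names, and response fields are placeholders, not the documented Sta.sh API:


// Hypothetical sync loop against a /delta-style endpoint (URL and field names are made up).
function syncStash(state, done) {
   var params = state.cursor ? { cursor: state.cursor } : {};
   $.getJSON('https://example.com/api/stash/delta', params, function (delta) {
      if (delta.reset) {
         // Our cursor was too old (the changelog rolled over): start from scratch.
         state.items = {};
      }
      $.each(delta.entries, function (i, entry) {
         if (entry.deleted) {
            delete state.items[entry.id];    // gone since the last sync
         } else {
            state.items[entry.id] = entry;   // folder or submission metadata
         }
      });
      state.cursor = delta.cursor;           // remember where we left off
      if (delta.has_more) {
         syncStash(state, done);             // fetch the next chunk of ~120 submissions
      } else {
         done(state);
      }
   });
}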

---

That's cool and all.. but where is the API for [awesome feature X]?

As kouiskas pointed out in last year's journal, adding new APIs is a time-consuming process; once we release an API, we want to support it forever. This limits our choices to APIs that are stable, scalable, and compatible with our long-term development goals. 

There have been numerous requests for great APIs over the last year; unfortunately, we can't build them all. Many would rely on infrastructure that will be refactored (e.g., Message Center) or that is in the process of being refactored (e.g., Search). 

Does this mean you should stop requesting those APIs? Not at all. Keep the requests coming! If you're an app developer and really, really want to see a specific new API call, here are some things that will help us review your request:
  1. Give us some stats. 
    If you have a successful app that is being used daily by deviants worldwide, growing in popularity and proving a benefit to the community, throw some numbers our way. Although popularity doesn't affect what is and isn't possible, it can be an important factor in deciding where to concentrate our efforts for the most benefit to most users.

  2. Show us what your app will do with the call. 
    Most of you already have very successful dA apps in the desktop and mobile world. We realize that there is a lot of DiFi hacking and page scraping necessary to make your app do what you want it to do. Show us how the new API will reduce your reliance on these methods. Include screen captures if you can; we like to look at shiny XHR logs and fail messages.

  3. Be specific and technical.
    We're developers, just like you. If you see a need for an official API, take the time to help us understand exactly what you want from the API (request parameters and result objects). This doesn't mean that we will honor your spec request exactly (we do what we can with the infrastructure we have), but it will help us understand your need more clearly.


We're Hiring Developers

We're looking for talented web developers to join our team! :la: Interested? Check out deviantart.theresumator.com/ap…
