Drupal Fire - Quick Roundup from important Drupal blogs and sites

News, views, tips, and tricks from the best Drupal developers, designers, and writers. All Drupal, all the time.

Trip Report: Forward 4 Web Technology Summit

Thu, 2016-02-25 19:00

Four Kitchens (via DrupalFire)

The second week of February, Web Chefs Patrick and Flip visited San Francisco for the Forward 4 Web Technology Summit.

Categories: Drupal Universe

Open Mapping Happy Hour

Thu, 2016-02-25 00:00

Development Seed (via DrupalFire)

Yesterday, Development Seed hosted an Open Mapping Happy Hour at our office in Blagden Alley. We were delighted that over 100 open mappers and FedGISers braved the rain for some great conversation and scheming.


A few highlights of last night’s presentations include:

Thanks to everyone who came out last night. We had an awesome time hanging out and talking open mapping. Thanks also to Mapbox for cohosting the event.

Categories: Drupal Universe

Why your digital strategy is necessary

Wed, 2016-02-24 20:03

Four Kitchens (via DrupalFire)

[Arnold Schwarzenegger voice] What is a CDO, and what do they do? We’ll answer that, and explain why it’s important to consider when planning your digital strategy.

Categories: Drupal Universe

New and shiny vs Good old software

Wed, 2016-02-24 09:06

Lullabot (via DrupalFire)

«It depends». That may be the most-used sentence when evaluating the correct technology for a generic challenge. Choosing the right technology is not a trivial problem. When the needs are poorly defined, or not defined at all, it may not even be possible to suggest a good solution. But even if the problem is broken down into simple technical challenges, there are multiple factors that influence the implementation of the solution. Budget constraints, internal knowledge, timelines, resourcing issues, corporate policies, integration with existing systems, technical feasibility, and many other factors will influence the final decision.

Leaving these other—extremely valid—considerations aside, let’s focus on the technical part. Oftentimes you have different frameworks, applications, libraries, SaaS products, and more that can provide a solution with different degrees of success. The sheer size of this list of options can complicate the decision. It is common practice to check what everyone else is doing. Doing market research is a good first step, but it should not determine the final decision.

Many times when a new technology arises, there is media hype. All of a sudden, many computer-science blogs and news sites are flooded with posts explaining the merits of this groundbreaking new solution. Oftentimes these communications are very passionate and contagious, and that’s good! Their goal is to get other people to try the new tools. Information flows rapidly and allows people who are solving that particular problem to assess whether they want to try a new direction.

This situation can result in what I call the shiny effect. I admit that I have been blinded by it in the past. I have spent large amounts of time learning frameworks that turned out to be great for only a narrow set of use cases. That has made me cautious about how I approach new tools, even though I still get excited about them.

One widely used argument is based on authority. (If all these multi-million dollar organizations are using this solution for the exact same problem I have, then I should do the same. After all, they would never choose to implement it without careful analysis.) Resorting to the argument from authority will not always lead you to a reliable answer; the market trend will not always be accurate, and there are multiple examples of that.

I remember how, in a not so distant past, NoSQL databases were postulated by some as the future; some went so far as to say that SQL was dead. There was a time when Apache was to be completely supplanted by other alternatives. The LAMP stack was supposed to be replaced by the MEAN stack without leaving any trace. SOAP and RPC were to disappear because of REST, while REST is now supposedly irrelevant because of GraphQL. And by now, no one was supposed to be using Basic Auth. The opposite happened as well: there were prophecies that emerging solutions were doomed to disappear, and those were never fulfilled either.

None of that happened. All the tools and technologies listed above have found a way to coexist and share the spectrum of solutions. Even when they seemed to overlap at the beginning, in most cases the emerging alternatives ended up being a better solution only for a subset of scenarios. That is a very big win for everyone: we now have two different tools that are very good at solving similar problems. Even if you can build a search backend in MySQL, for instance, you are probably going to have a better experience if you do it with Solr.

Disregarding well-proven technological solutions with prolific ecosystems is dangerous. Failing to learn about new solutions that may prove to be better at solving your problem is just as dangerous. We really need to take the time to understand the problem that the new solution is fixing before jumping in with both feet. Carey Flichel attributes this, amongst other causes, to boredom and lack of understanding in Why Developers Keep Making Bad Technology Choices.

It’s no surprise then that we want to try something new, even if we’ve adequately solved a problem before. We are natural puzzle solvers, and sometimes you just want to try a new puzzle.

Choosing the right solution often involves checking what everyone else is doing, and then analyzing the problem for yourself while taking all the options into consideration. You must not trust new solutions just because they’re new any more than you trust old solutions just because they’re old. Instead, zero in on the problem to be solved and find the best solution regardless of the buzz. Keep every technology solution on the table until you understand the nuances of the problem space, and let that be your guiding light. Being aware of all the technologies involved, and knowing which is the best choice, takes time and a lot of research. Many times this will require a software architect to guide you.

Categories: Drupal Universe

Fast, easy satellite monitoring

Wed, 2016-02-24 00:00

Development Seed (via DrupalFire)

The new Astro Digital Imagery Browser launched today. It makes it incredibly easy to process satellite imagery for the places that you care about, today and into the future. Use the platform to track urban sprawl, detect illegal logging, or monitor the agricultural modernization project that you are about to kick off.

Astro Digital Imagery Browser
Fast and responsive. We use Mapbox Vector Tiles to quickly filter through hundreds of thousands of satellite metadata records right in your browser.

Categories: Drupal Universe

A history of JavaScript across the stack

Mon, 2016-02-22 17:10

Dries Buytaert (via DrupalFire)

Did you know that JavaScript was created in just 10 days? In May 1995, Brendan Eich wrote the first version of JavaScript while working at Netscape.

For the first 10 years of JavaScript's life, professional programmers denigrated JavaScript because its target audience consisted of "amateurs". That changed in 2004 with the launch of Gmail. Gmail was the first popular web application that really showed off what was possible with client-side JavaScript. Competing e-mail services such as Yahoo! Mail and Hotmail featured extremely slow interfaces that used server-side rendering almost exclusively, with almost every action by the user requiring the server to reload the entire web page. Gmail began to work around these limitations by using XMLHttpRequest for asynchronous data retrieval from the server. Gmail's use of JavaScript caught the attention of developers around the world. Today, Gmail is the classic example of a single-page JavaScript app; it can respond immediately to user interactions and no longer needs to make roundtrips to the server just to render a new page.

A year later in 2005, Google launched Google Maps, which used the same technology as Gmail to transform online maps into an interactive experience. With Google Maps, Google was also the first large company to offer a JavaScript API for one of their services allowing developers to integrate Google Maps into their websites.

Google's XMLHttpRequest approach in Gmail and Google Maps ultimately came to be called Ajax (originally "Asynchronous JavaScript and XML"). Ajax described a set of technologies, of which JavaScript was the backbone, used to create web applications where data can be loaded in the background, avoiding the need for full page refreshes. This resulted in a renaissance period of JavaScript usage spearheaded by open source libraries and the communities that formed around them, with libraries such as Prototype, jQuery, Dojo and Mootools. (We added jQuery to Drupal core as early as 2006.)

In 2008, Google launched Chrome with a faster JavaScript engine called V8. The release announcement read: "We also built a more powerful JavaScript engine, V8, to power the next generation of web applications that aren't even possible in today's browsers." At launch, V8 improved JavaScript performance 10x over Internet Explorer by compiling JavaScript code to native machine code before executing it. This caught my attention because I had recently finished my PhD thesis on the topic of JIT compilation. More importantly, this marked the beginning of different browsers competing on JavaScript performance, which helped drive JavaScript's adoption.

In 2010, Twitter made a move unprecedented in JavaScript's history. For that year's Twitter.com redesign, they began implementing a new architecture in which both server-side and client-side code were built almost entirely in JavaScript. On the server side, they built an API server that offered a single set of endpoints for their desktop website, their mobile website, their native apps for iPhone, iPad, and Android, and every third-party application. As a result, they moved much of the UI rendering and corresponding logic to the user's browser. A JavaScript-based client fetches the data from the API server and renders the Twitter.com experience.

Unfortunately, the redesign caused severe performance problems, particularly on mobile devices. Lots of JavaScript had to be downloaded, parsed and executed by the user's browser before anything of substance was visible. The "time to first interaction" was poor. Twitter's new architecture broke new ground by offering a number of advantages over a more traditional approach, but it lacked support for various optimizations available only on the server.

Twitter suffered from these performance problems for almost two years. Finally in 2012, Twitter reversed course by passing more of the rendering back to the server. The revised architecture renders the initial pages on the server, but asynchronously bootstraps a new modular JavaScript application to provide the fully-featured interactive experience their users expect. The user's browser runs no JavaScript at all until after the initial content, rendered on the server, is visible. By using server-side rendering, the client-side JavaScript could be minimized; fewer lines of code meant a smaller payload to send over the wire and less code to parse and execute. This new hybrid architecture reduced Twitter's page load time by 80%!

In 2013, Airbnb was the first to use Node.js to provide isomorphic (also called universal or simply shared) JavaScript. In the Node.js approach, the same framework is executed identically on the server side and the client side. On the server side, it provides an initial render of the page, and data can be provided through Node.js or through REST API calls. On the client side, the framework binds to DOM elements, "rehydrates" the HTML (updates the initial server-side render provided by Node.js), and makes asynchronous REST API calls whenever updated data is needed.

The biggest advantage Airbnb's JavaScript isomorphism had over Twitter's approach is the notion of a completely reusable rendering system. Because the client-side framework is executed the same way on both server and client, rendering becomes much more manageable and debuggable in that the primary distinction between the server-side and client-side renders is not the language or templating system used, but rather what data is provisioned by the server and how.

Universal JavaScript

In a universal JavaScript approach utilizing shared rendering, Node.js executes a framework (in this case Angular), which then renders an initial application state in HTML. This initial state is passed to the client side, which also loads the framework to provide whatever further client-side rendering is necessary, particularly to "rehydrate" or update the server-side render.

JavaScript has come a long way: from a prototype written in 10 days to being used across the stack by some of the largest websites in the world. Long gone are the days of clunky browser implementations whose APIs changed depending on whether you were using Netscape or Internet Explorer. It took JavaScript 20 years, but it is finally considered an equal partner to traditional, well-established server-side languages.

Categories: Drupal Universe

Weekly Watercooler #136

Mon, 2016-02-22 17:00

Four Kitchens (via DrupalFire)

In this edition: Apps and extensions we love! Web Chef SANDCamp sessions! SXSW parties we’re hosting! And four new ways to become a Web Chef. Oh, my!

Categories: Drupal Universe

Drupal Association Elections 2016

Thu, 2016-02-18 22:57

Matthew Saunders (via DrupalFire)

As you might know, I’ve been an elected At Large member of the Drupal Association board for the last two years, and I’ve been chairing the Governance Committee. Some highlights of that work include changes that implemented term limits for Board members, extension of the At Large member term to two years, liaising with the community on issues they wanted to discuss, and a myriad of activities related to good governance.


Categories: Drupal Universe

Learning about Drupal on the edge of the ocean

Thu, 2016-02-18 17:27

Four Kitchens (via DrupalFire)

It’s time for SANDCamp 2016, the annual Drupal camp! This year, Four Kitchens has a strong presence along with many other great people and organizations. Here’s a preview of the sessions we’re presenting this year.

Categories: Drupal Universe

Welcome Ali!

Thu, 2016-02-18 00:00

Development Seed (via DrupalFire)

We’re happy to welcome Ali Felski to the Development Seed family! Ali has led design teams at Sunlight Foundation and iStrategyLabs. She is going to help us improve design practices across Development Seed, including integrating UX and usability research into the fabric of each project.

Ali and a very small horse

A native Yooper, Ali is more recently a proud Washingtonian, appreciating the weather down here a little more. Over the past decade, Ali has brought good design to the far corners of Washington DC. She improved usability at the CIA, built the Sunlight Foundation’s design team from the ground up, and systematized usability research at iStrategyLabs, where she served as user experience director. Not to mention winning Best Personal Portfolio at SXSW. And while her dream job would be as a taste-tester of artisanal chocolate, she’s convinced us that transforming design through UX thinking is a close second.

High-five Ali on Twitter, and follow her on GitHub.

Categories: Drupal Universe

Twenty Years of India at Night

Thu, 2016-02-18 00:00

Development Seed (via DrupalFire)

Satellite imagery is among the most powerful kinds of open data. The Defense Meteorological Satellite Program captures images of the earth just after dusk every night, and it’s been doing so for over 20 years. Working with the World Bank and the University of Michigan, we’ve extracted and organized the data from every night for the past two decades for all of India’s 600,000 villages.

The six billion data points[1] that power India Night Lights tell a rich story of India’s urban development and electrification from 1992 to 2012. The data is complex and hard to work with, but from the chaos emerge real insights, such as the role of politics in service provision and the impact of state-funded electrification projects.

Interact with 20 years of nighttime lights for 600,000 villages

The India lights explorer is the first of its kind, and more countries will roll out soon. The software that powers the Night Lights API and the Night Lights Data Explorer is all available on GitHub.

  1. For those checking our math at home: some villages have multiple readings per night.
Categories: Drupal Universe

Digital Black Swans

Wed, 2016-02-17 19:00

Four Kitchens (via DrupalFire)

How can you prepare for something you can’t prepare for? What happens after an out-of-the-blue paradigm shift? How can you strategize for a digital Black Swan?

Categories: Drupal Universe

Javascript aggregation in Drupal 7

Wed, 2016-02-17 11:20

Lullabot (via DrupalFire)


What is it? Why should we care?

Javascript aggregation in Drupal is just what it sounds like: it aggregates Javascript files that are added during a page request into a single file. Modules and themes add Javascript using Drupal’s API, and the Javascript aggregation system takes care of aggregating all of that Javascript into one or more files. Drupal does this in order to cut down on the number of HTTP requests needed to load a page. Fewer HTTP requests is generally better for front-end performance.

In this article we’ll take a look at the API for adding Javascript, paying specific attention to the options affecting aggregation in order to make the best use of the system. We’ll also take a look at some common pitfalls to look out for and how you can avoid them using the Advanced Aggregation (AdvAgg) module. This article will mostly be looking at Drupal 7; however, much of what we’ll cover applies equally to Drupal 8, and we’ll also look specifically at what’s changed in Drupal 8.

If fewer is better, why multiple aggregates?

Before we dig in: I said that fewer HTTP requests is generally better for front-end performance, yet you may have noticed that Drupal creates more than one Javascript aggregate for each page request. Why not a single aggregate? The reason is to take advantage of browser caching. If you had a single aggregate with all the Javascript on the page, any difference in the aggregate from page to page would cause the browser to download a different aggregate with a lot of the same code. This situation arises when Javascript is conditionally added to the page, as is the case with many modules and themes. Ideally, we would group the Javascript that appears on every page separately from conditionally added Javascript. That way the browser would download the every-page aggregate once and subsequently load it from cache on other page requests to your website. Drupal’s Javascript aggregation attempts to give you the ability to do exactly that, striking a balance between making fewer HTTP requests and leveraging browser caching.

The Basics

Let’s take a step back and briefly go over how to use Javascript aggregation in Drupal 7. You can turn on Javascript aggregation by going to Site Configuration > Development > Performance in your Drupal admin. Check off "Aggregate Javascript files" and Save. That’s it. Drupal will start aggregating module and theme Javascript from there on out.

Using the API

Once you’ve enabled Javascript aggregation, great! You’re on your way to sweet, sweet performance gains. But you’ve got modules and themes to build, and Javascript to go along with them. How do you make use of Javascript aggregation provided by Drupal? What kind of control do you have over the process?

There are a couple API entry points for adding Javascript to the page. Most API entry points have parameters that let you tell Drupal how to aggregate your Javascript. Let’s look at each in turn.

drupal_add_js

This is a loaded function that is used to add a Javascript file, externally hosted Javascript, a setting, or inline Javascript to the page. When it comes to Javascript aggregation, only Javascript files are aggregated by Drupal, so we’re going to focus on adding files. The other types (inline and external) do have important impacts on Javascript aggregation though, which we’ll get into later.
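
To make that concrete, here is a minimal sketch of adding a file with explicit aggregation-related options (the module name and file path are hypothetical):

drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/mymodule.js', array(
  'type' => 'file',      // The default; 'inline' and 'external' are not aggregated.
  'scope' => 'footer',   // Output before the closing </body> tag.
  'group' => JS_DEFAULT, // Stick to JS_LIBRARY, JS_DEFAULT, or JS_THEME.
  'every_page' => TRUE,  // Only if the file really is added on every page.
  'weight' => 0,         // Ordering within the aggregate.
));

Each unique combination of scope, group, and every_page ends up in its own aggregate, as we’ll see below.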

hook_library/drupal_add_library

Drupal also provides a mechanism to register Javascript/CSS "libraries" associated with a module. This is nice because you can specify the options to pass to drupal_add_js in one place. That gives you the flexibility to call drupal_add_library in multiple places, without having to repeat the same options. The other nice thing is that you can specify dependencies for your library and Drupal will take care of resolving those dependencies and putting them in the right order for you.

I like to register every script in my module as a library, and only add scripts using drupal_add_library (or the corresponding #attached method, see below). This way I’m clearly expressing any dependencies and in the case where my script is added in more than one place, the options I give the aggregation system are all in one place in case I need to make a change. It’s also a nice way to safely deprecate scripts. If all the dependencies are spelled out, you can be assured that removing a script won’t break things.

Instructions for using hook_library and drupal_add_library are well documented. However, one important thing to note is the third parameter to drupal_add_library, every_page, which is used to help optimize aggregation. At registration time in hook_library, you typically don’t know whether a library will be used on every page request, especially if you’re registering libraries for other module authors to use. This is why drupal_add_library has an every_page parameter. If you’re adding a library unconditionally on every page, be sure to set the every_page parameter to TRUE so that Drupal will group your script into a stable aggregate with other scripts that are added to every page. We’ll discuss every_page in more detail below.
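
As an illustration, here is a minimal sketch of registering a library and adding it (the module, library name, and file are hypothetical):

function mymodule_library() {
  $libraries['mymodule.behaviors'] = array(
    'title' => 'My module behaviors',
    'version' => '1.0',
    'js' => array(
      drupal_get_path('module', 'mymodule') . '/js/behaviors.js' => array(
        'group' => JS_DEFAULT,
      ),
    ),
    // Drupal resolves these and orders the scripts for you.
    'dependencies' => array(
      array('system', 'jquery'),
    ),
  );
  return $libraries;
}

// Elsewhere: the third parameter is every_page. Pass TRUE when the library
// is added unconditionally on every page.
drupal_add_library('mymodule', 'mymodule.behaviors', TRUE);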

#attached

#attached is a render array property that lets you add Javascript, CSS, libraries, and arbitrary data to the page. #attached is the preferred approach for adding Javascript in Drupal. In fact, drupal_add_js and drupal_add_library are gone in Drupal 8 in favor of #attached. #attached is nice since it is part of a render array and can be easily altered by other modules and themes. It’s also essential for places where render arrays are cached. With caching in play, functions that build out a render array are bypassed in favor of a cached copy. In that scenario, you need to have your Javascript addition included in #attached on the render array, because a call to drupal_add_js/drupal_add_library in your builder function would otherwise be missed.

Drupal developers often fall back on drupal_add_js for lack of a render array in context. It’s common to add Javascript in places like hook_init, preprocess and process functions, where there is no render array available to add your Javascript via #attached. In these scenarios, consider whether your Javascript is more appropriately added via a hook_page_build, or hook_node_view, where there is a render array available to add your script via #attached. Not only will you begin to align yourself with the Drupal 8 way of doing things, but you open yourself up to being able to use the render_cache module, which can gain you some performance. See the documentation for drupal_process_attached for more information.
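
As a sketch, attaching the hypothetical library from above in hook_node_view would look like this:

function mymodule_node_view($node, $view_mode, $langcode) {
  // Added via the render array, so the attachment survives render caching.
  $node->content['#attached']['library'][] = array('mymodule', 'mymodule.behaviors');
}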

Controlling aggregation

Let’s take a close look at the parameters affecting Javascript aggregation. Drupal will create an aggregate for each unique combination of the scope, group, and every_page parameters, so it’s important to understand what each of these mean.

scope

The possible values for scope are ‘header’ and ‘footer’. A scope of ‘header’ will output the script inside the <head> tag; a scope of ‘footer’ will output the script just before the closing </body> tag.

group

The ‘group’ option takes any integer as a value, although you should stick to the constants JS_LIBRARY, JS_DEFAULT and JS_THEME to avoid excessive numbers of aggregates and follow convention.

every_page

The ‘every_page’ option expects a boolean. This is a way to tell the aggregation system that your script is guaranteed to be included on every page. This flag is commonly overlooked, but is very important. Any scripts that are added on every page should be included in a stable aggregate group, one that is the same from page to page, such that the browser can make good use of caching. The every_page parameter is used for exactly that purpose. Miss out on using every_page and your script is apt to be lumped into a volatile aggregate that changes from page to page, forcing the browser to download your code anew on each page request.

What can go wrong

There are a few problems to watch out for when it comes to Javascript aggregation. Let’s look at some in turn.

Potential number of aggregates

The astute reader will have noticed that there is the potential for a lot of aggregates to be created. When you combine the scope, group, and every_page options, the number of aggregates per page can get out of control if you’re not careful. With two values for scope, three for group, and two for every_page, there is the potential to climb to twelve aggregates on a page. Twelve seems a little out of control. Sure, we want to leverage browser caching, but that many HTTP requests is concerning. If you’re seeing a high number of aggregates, it’s likely that the modules and themes adding Javascript are not using the API as well as they could.

Misuse of Groups

As was mentioned, JS_LIBRARY, JS_DEFAULT and JS_THEME are the default constants that ship with Drupal to group Javascript. When adding Javascript, modules and themes should stick to these groups. As soon as you deviate, you introduce a new aggregate. Often when people deviate from the default groups, it’s to ensure ordering. Maybe you need your script to be the very first script on the page, so you set your group to JS_LIBRARY - 100. Instead, use the ‘weight’ option to manage ordering: set your group to JS_LIBRARY and set the weight very low to ensure you’re first in the JS_LIBRARY aggregate (see the sketch below). Another issue I see is themes adding scripts explicitly under the JS_THEME group. This is logical; after all, it is Javascript added by the theme! However, it’s better to make use of hook_library, declare your dependencies, and let the group default to JS_DEFAULT. The order you need is preserved by declaring your dependencies, and you avoid creating a new aggregate unnecessarily.
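
A sketch of that difference (the $path variable is hypothetical):

// Avoid: a non-standard group spawns a brand new aggregate.
drupal_add_js($path, array('group' => JS_LIBRARY - 100));

// Prefer: stay in the standard group and sort first within it.
drupal_add_js($path, array('group' => JS_LIBRARY, 'weight' => -100));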

Inline and External Scripts

Extra care should be taken when adding external and inline scripts. drupal_add_js and friends preserve order well: they track the order in which the function is called for all the scripts added to the page and respect it. That’s all well and good. However, if an inline or external script is added between file scripts, it can split the aggregate. For example, if you add three scripts (two file scripts and one external), all with the same scope, group, and every_page values, you might think that Drupal would aggregate the two file scripts and output two script tags total. However, if drupal_add_js gets called for a file, then the external script, then the other file, you end up with two aggregates and the external script in between, for a total of three script tags. This is probably not what you want. In this case it’s best to specify a high or low weight value for the external script so it sits at the top or bottom of the aggregate and doesn’t end up splitting it, as sketched below. The same goes for inline JS.
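
Here is a sketch of the splitting scenario and the weight-based fix (paths are hypothetical):

// Called in this order, the external script splits the file aggregate,
// producing three script tags instead of two.
drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/a.js');
drupal_add_js('https://example.com/analytics.js', 'external');
drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/b.js');

// Weighting the external script to the bottom keeps the file aggregate whole.
drupal_add_js('https://example.com/analytics.js', array(
  'type' => 'external',
  'weight' => 100,
));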

Scripts added in a different order

I alluded to this above, but certain situations can arise where scripts get added to an aggregate in a different order from one request to the next. This can happen for any number of reasons, but because Drupal tracks the call order of drupal_add_js, you end up with a different order for the same scripts in a group, and thus a different aggregate. Sadly, the same code will then sit in two aggregates on the server with slightly different source order; had the order been stable, they would have been exactly equal, produced a single aggregate, and been cached by the browser from page to page. The solution in this case is to use weight values to ensure the same order within an aggregate from page to page. It’s not ideal, because you don’t want to have to set weights on every hook_library / drupal_add_js call. I’d recommend handling it on a case-by-case basis.

Best Practices

Clearly, there is a lot that can go wrong, or at least end up taking you to a place that is sub-optimal. Considering all that we’ve covered, I’ve come up with a list of best practices to follow:

Always use the API, never ‘shoehorn’ scripts into the page

A common problem is running into modules and themes that sidestep the API to add their Javascript to the page. Common methods include using drupal_add_html_head to add script to the header, using hook_page_alter and appending script markup to $variables[‘page_bottom’], or otherwise outputting a raw <script> tag directly into the page markup. Scripts added this way are invisible to the aggregation system and should be avoided.

Use hook_library

hook_library is a great tool. Use it for any and all Javascript you output (inline, external, file). It centralizes all of the options for each script so you don’t have to repeat them, and best of all, you can declare dependencies for your Javascript.

Use every_page when appropriate

The every_page option signals to the aggregation system that your script will be added to all pages on the site. If your script qualifies, make sure you’re setting every_page = TRUE. This puts your script into a "stable" aggregate that is cached and reused often.

AdvAgg

Advanced CSS/JS Aggregation (AdvAgg) is a contributed module that replaces Drupal’s built-in aggregation system with its own, making many improvements and adding additional features along the way. The way that you add scripts is the same, but behind the scenes, how the groups are generated and aggregated into their respective aggregates is different. AdvAgg also attempts to overcome many of the scenarios where core aggregation can go wrong, as listed above. I won’t cover all of what AdvAgg has to offer; it has an impressive scope that would warrant its own article or more. Instead I’ll touch on how it can solve some of the problems we listed above, as well as some other neat features.

Out of the box, the core AdvAgg module supplies a handful of transparent backend improvements. One of those is stampede protection. After a code release with JS/CSS changes, there is the potential that multiple requests for the same page will all trigger the calculation and writing of the same new aggregates, duplicating work. On high-traffic sites, this can lead to deadlocks which can be detrimental to performance. AdvAgg implements locking so that only the first process will perform the work of calculating and writing aggregates. AdvAgg also employs smarter caching strategies so the work of calculating and writing aggregates is done as infrequently as possible, and only when there is a change to the source files. These are nice improvements, but there is great power to behold in AdvAgg’s submodules.

AdvAgg Compress Javascript

AdvAgg comes with two submodules to enhance JS and CSS compression. They each provide a pluggable way to have a compressor library act on each aggregate before it’s saved to disk. There are a few options for JS compression; JSqueeze is a great option that works out of the box. However, if you have the flexibility on your server to install the JSMIN C extension, it’s slightly more performant.

AdvAgg Modifier

AdvAgg Modifier is another submodule that ships with AdvAgg, and here is where we get into solving some of the problems listed earlier. Let’s explore some of the more interesting options made available by AdvAgg Modifier:

Optimize Javascript Ordering

There are a couple of options under this fieldset that seek to solve some of the problems laid out above. First up, "Move Javascript added by drupal_add_html_head() into drupal_add_js()" does just what it says, correcting the errant use of the API by module and theme authors. Doing this allows the Javascript in question to participate in aggregation, and it potentially removes an HTTP request if a file was being added. Next, "Move all external scripts to the top of the execution order" and "Move all inline scripts to the bottom of the execution order" are both attempts at resolving the issue of splitting the aggregate. This more or less assumes that external files are library-esque and that inline Javascript is meant to run after all other Javascript. That may or may not be the case depending on what you’re doing, and you should definitely use caution. Be that as it may, if it works for you, it could be a simple way to avoid splitting the aggregate without the manual work of adjusting weights.

Move JS to the footer

As we discussed, you typically add Javascript to one of two scopes: header or footer. A common performance optimization is to move all of the Javascript to the footer. AdvAgg gives you a couple of choices in how you do that, ranging from some to all of the Javascript moved to the footer. This can easily break your site, however, and you definitely need to test and use caution here. The reason, of course, is that there could be other scripts that aren’t declaring their dependencies properly, and/or are ‘shoehorning’ their Javascript into the page while depending on a dependency existing in the header. These will need to be fixed on a case-by-case basis (submit patches to contrib modules!).

If you still have Javascript that really does need to be in the header, AdvAgg extends the options for drupal_add_js and friends with a boolean scope_lock that, when set to TRUE, prevents AdvAgg from moving the script to the footer (see the sketch below). There always seems to be some stubborn ad or analytics provider that insists on presence in the header, backed by the business, forcing you to make at least one of these exceptions.

Another related option is "Put a wrapper around inline JS if it was added in the content section incorrectly". This is an attempt to resolve the problem of inline Javascript in the body depending on things being in the header. AdvAgg will wrap any inline Javascript in a timer that waits for jQuery and Drupal to be defined before running the inline code. It uses a regular expression to search for the scripts, which, while neat, feels a little brittle. There is an alternate option to search for the scripts using XPath instead of regex, which could be more reliable. If you try either, do so with care and test first.
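
Assuming scope_lock behaves as described, usage would look something like this sketch (the URL is hypothetical):

// Keep a stubborn analytics script in the header even when AdvAgg is
// configured to move Javascript to the footer.
drupal_add_js('https://example.com/must-be-first.js', array(
  'type' => 'external',
  'scope' => 'header',
  'scope_lock' => TRUE, // Option added by AdvAgg, not Drupal core.
));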

Drupal 8

Not a lot is different when it comes to Javascript aggregation in D8, but there have been a few notable changes. First, the aggregation system has been refactored under the hood to be pluggable. You can now cleanly swap out and/or add to the aggregation system, whereas in Drupal 7 it was a difficult and messy affair to do so. Modules like AdvAgg should have a relatively easier time implementing their added functionality. Other improvements to the aggregation system that didn’t quite make the D8 release will now have a clearer path to implementation for the same reason. In terms of its function and the way in which Javascript is grouped for aggregation, everything is as it was in Drupal 7.

The second change centers around adding Javascript to the page. drupal_add_js and drupal_add_library have been removed in Drupal 8 in favor of #attached. The primary motivation is to improve cacheability. With Javascript consistently added using #attached, it’s possible to do better render array caching. The classic example is that if you were using drupal_add_js in your form builder function, your Javascript would be missed if the cached render array was used and the builder function skipped. Caching aside, it’s nice that there is a single consistent way to add Javascript. Also related to this change, hook_library definitions have gone the way of the dodo. In their place, *.libraries.yml files are now used to define your Javascript and CSS assets as libraries, continuing the Drupal 8 theme of replacing info hooks with YML files.
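
For instance, assuming a library named mymodule/behaviors has been defined in mymodule.libraries.yml (with its files and dependencies such as core/jquery declared there), attaching it site-wide in Drupal 8 is a short sketch:

function mymodule_page_attachments(array &$attachments) {
  // The files and dependencies themselves live in mymodule.libraries.yml.
  $attachments['#attached']['library'][] = 'mymodule/behaviors';
}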

A third somewhat related change that I’ll point out is that Drupal core no longer automatically adds jQuery, Drupal.js and Drupal.settings (drupalSettings in Drupal 8). jQuery still ships with core, but if you need it, you have to declare it as a dependency in your *.libraries.yml file. There have also been steps taken to reduce Drupal core’s reliance on jQuery. This isn’t directly related to Javascript aggregation per se, but it’s a big change nonetheless that you’ll have to keep in mind if you’re used to writing Javascript in Drupal 7 and below. It’s a welcome improvement with potential for some nice front end performance gains. For more information and specific implementation details, check out the guide for adding Javascript and CSS from a theme and adding Javascript and CSS from a module.

HTTP 2.0

Any article about aggregating Javascript assets these days deserves a disclaimer regarding HTTP 2.0. For the uninitiated, HTTP 2.0 is the new version of the HTTP protocol that while maintaining all the semantics of HTTP 1.1, significantly changes the implementation details. The most noted change in a nutshell is that instead of opening a bunch of TCP connections to download all the resources you need from a single origin, HTTP 2.0 opens a single connection per origin and multiplexes resources on that connection to achieve parallelism. There is a lot of promise here because HTTP 2.0 enables you to minimize the number of TCP connections your browser has to open, while still downloading assets in parallel and individually caching assets. That means that you could be better off not aggregating your assets at all, allowing the browser to maximize caching efficiency without incurring the cost of multiple open TCP connections. In practice, the jury is still out on whether no aggregation at all is a good idea. It turns out that compression doesn’t do as well on a collection of individual small files as it does on aggregated large files. It’s likely that some level of aggregation will continue to be used in practice. However as hosts more widely support HTTP 2.0 and old browsers fall off, this will become a larger area of interest.

Conclusion

Whew, we’ve come to the end at last. My purpose was to go over all of the nuances of Javascript aggregation in Drupal. It’s an important topic when it comes to front-end performance, and one that deserves careful consideration when you're trying to eke every last bit of performance out of your site. With Drupal 8 and HTTP 2.0 now a reality, it’ll be interesting to watch as things evolve for the better.

Categories: Drupal Universe

When traffic skyrockets your site shouldn't go down

Wed, 2016-02-17 04:06

Dries Buytaert (via DrupalFire)

This week's Grammy Awards was one of the best examples of the high-traffic event websites that Acquia is so well known for. This marks the fourth time we've hosted the Grammys' website. We saw close to 5 million unique visitors requesting nearly 20 million pages on the day of the awards and the day after. From television's Emmys to Super Bowl advertisers' sites, Acquia has earned its reputation for keeping Drupal sites humming during the most crushing peaks of traffic.

These "super spikes" aren't always fun. For the developers building these sites to the producers updating each site during the event, nothing compares to the sinking feeling when a site fails when it is needed the most. During the recent Superbowl, one half-time performer lost her website (not on Drupal), giving fans the dreaded 503 Service Unavailable error message. According to CMSWire: "Her website was down well past midnight for those who wanted to try and score tickets for her tour, announced just after her halftime show performance". Yet for Bruno Mars' fans, his Acquia-based Drupal site kept rolling even as millions flooded his site during the half-time performance.

For the Grammys, we can plan ahead and expand their infrastructure prior to the event. This is easy thanks to Acquia Cloud's elastic platform capacity. Our technical account managers and support teams work with the producers at the Grammys to make sure the right infrastructure and configuration is in place. Specifically, we simulate award night traffic as best we can, and use load testing to prepare the infrastructure accordingly. If needed, we add additional server capacity during the event itself. Just prior to the event, Acquia takes a 360 degree look at the site to ensure that all of the stakeholders are aligned, whether internal to Acquia or external at a partner. We have technical staff on site during the event, and remote teams that provide around the clock coverage before and after the event.

Few people know what goes on behind the scenes during these super spikes, but the biggest source of pride is that our work is often invisible; a job well done means that our customer's best day didn't turn into their worst day.

Categories: Drupal Universe

In memoriam: Richard Burford

Tue, 2016-02-16 04:09

Dries Buytaert (via DrupalFire)

It is with great sadness that we learned last week that Richard Burford has passed away. This is a tragic loss for his family, for Acquia, the Drupal community, and the broader open source world. Richard was a Sr. Software Engineer at Acquia for three and a half years (I still remember him interviewing with me), known as psynaptic in the Drupal community. Richard had been a member of the Drupal community for more than nine years. During that time, he contributed hundreds of patches across multiple projects, started a Drupal user group in his area, and helped drive the Drupal community in the UK, where he lived. Richard was a great person, a dedicated and hard-working colleague, a generous contributor to Drupal, and a friend. Richard was 36 years young, with a wife and three children. He was the sole income earner for the family, so a fundraising campaign has been started to help his family during these difficult times; please consider contributing.

Categories: Drupal Universe

The Uncomplicated Firewall

Wed, 2016-02-10 13:37

Lullabot (via DrupalFire)

Firewalls are a tool that most web developers only deal with when sites are down or something is broken. Firewalls aren’t fun, and it’s easy to ignore them entirely on smaller projects.

Part of why firewalls are complicated is that what we think of as a "firewall" on a typical Linux or BSD server is responsible for much more than just blocking access to services. Firewalls (like iptables, nftables, or pf) manage filtering inbound and outbound traffic, network address translation (NAT), Quality of Service (QoS), and more. Most firewalls have an understandably complex configuration to support all of this functionality. Since firewalls are dealing with network traffic, it’s relatively easy to lock yourself out of a server by blocking SSH by mistake.

In the desktop operating system world, there has been great success in the "application" firewall paradigm. When I load a multiplayer game, I don’t care about the minutiae of ports and protocols - just that I want to allow that game to host a server. Windows, OS X, and Ubuntu all support application firewalls where applications describe what ports and protocols they need open. The user can then block access to those applications if they want.

The OS X firewall UI

Uncomplicated Firewall (ufw) is shipped by default with Ubuntu, but like OS X (and unlike Windows) it is not turned on automatically. With a few simple commands we can get it running, allow access to services like Apache, and even add custom services like MariaDB that don’t ship with a ufw profile. UFW is also available for other Linux distributions, though they may have their own preferred firewall tool.

Before you start

Locking yourself out of a system is a pain to deal with, whether it’s lugging a keyboard and monitor to your closet or opening a support ticket. Before testing out a firewall, make sure you have some way to get into the server should you lock yourself out. In my case, I’m using a LAMP vagrant box, so I can either attach the Virtualbox GUI with a console, or use vagrant destroy / vagrant up to start clean. With remote servers, console access is often available through a management web interface or a "recovery" SSH server like Linode’s Lish.

It’s good to run a scan on a server before you set up a firewall, so you know what is initially being exposed. Many services will bind to ‘localhost’ by default, so even though they are listening on a network port they can’t be accessed from external systems. I like to use nmap (which is available in every package manager) to run port scans.

$ nmap 192.168.0.110
Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:16 EDT
Nmap scan report for trusty-lamp.lan (192.168.0.110)
Host is up (0.0045s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
3306/tcp open mysql

Nmap done: 1 IP address (1 host up) scanned in 0.23 seconds

Listening for SSH and HTTP connections makes sense, but we probably don’t need rpcbind (used for NFS) or MySQL to be exposed.

Turning on the firewall

The first step is to tell UFW to allow SSH access:

$ sudo ufw app list
Available applications:
Apache
Apache Full
Apache Secure
OpenSSH

$ sudo ufw allow openssh
Rules updated
Rules updated (v6)

$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup

$ sudo ufw status
Status: active

To Action From
-- ------ ----

OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)

Test to make sure the SSH rule is working by opening a new terminal window and ssh’ing to your server. If it doesn’t work, run sudo ufw disable and see if you have some other firewall configuration that’s conflicting with UFW. Let’s scan our server again now that the firewall is up:

$ nmap 192.168.0.110
Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:31 EDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.07 seconds

UFW is blocking pings by default. We need to run nmap with -Pn so it blindly checks ports.

$ nmap -Pn 192.168.0.110
Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:32 EDT
Nmap scan report for trusty-lamp.lan (192.168.0.142)
Host is up (0.00070s latency).
Not shown: 999 filtered ports

PORT STATE SERVICE

22/tcp open ssh

Nmap done: 1 IP address (1 host up) scanned in 6.59 seconds

Excellent! We’ve blocked access to everything but SSH. Now, let’s open up Apache.

$ sudo ufw allow apache
Rule added
Rule added (v6)

You should now be able to access Apache on port 80. If you need SSL, allow "apache secure" as well, or just use the “apache full” profile. You’ll need quotes around the application name because of the space.

To remove a rule, prefix the entire rule you created with "delete". To remove the Apache rule we just created, run sudo ufw delete allow apache.

Blocking services

UFW operates in a "default deny" mode, where incoming traffic is denied and outgoing traffic is allowed. To operate in a “default allow” mode, run sudo ufw default allow. After running this, perhaps you don’t want Apache to be able to listen for requests, and only want to allow access from localhost. Using ufw, we can deny access to the service:

$ sudo ufw deny apache
Rule updated
Rule updated (v6)

You can also use "reject" rules, which tell a client that the service is blocked. Deny forces the connection to timeout, not telling an attacker that a service exists. In general, you should always use deny rules over reject rules, and default deny over default allow.

Address and interface rules

UFW lets you add conditions to the application profiles it ships with. For example, say you are running Apache for an intranet, and have OpenVPN setup for employees to securely connect to the office network. If your office network is connected on eth1, and the VPN on tun0, you can grant access to both of those interfaces while denying access to the general public connected on eth0:

$ sudo ufw allow in on eth1 to any app apache
$ sudo ufw allow in on tun0 to any app apache

Replace on with from to use IP address ranges instead of interface names.

Custom applications

While UFW lets you work directly with ports and protocols, this can be complicated to read over time. Is it Varnish, Apache, or Nginx that’s running on port 8443? With custom application profiles, you can easily specify ports and protocols for your own custom applications, or those that don’t ship with UFW profiles.

Remember up above when we saw MySQL (well, MariaDB in this case) listening on port 3306? Let’s open that up for remote access.

Pull up a terminal and browse to /etc/ufw/applications.d. This directory contains simple INI files. For example, openssh-server contains:

[OpenSSH]
title=Secure shell server, an rshd replacement
description=OpenSSH is a free implementation of the Secure Shell protocol.

ports=22/tcp

We can create a mariadb profile ourselves to cover the database port. Save the following as a new file in that directory, for example /etc/ufw/applications.d/mariadb:

[MariaDB]
title=MariaDB database server
description=MariaDB is a MySQL-compatible database server.

ports=3306/tcp

$ sudo ufw app list
Available applications:

Apache
Apache Full
Apache Secure
MariaDB
OpenSSH

$ sudo ufw allow from 192.168.0.0/24 to any app mariadb

You should now be able to access the database from any address on your local network.

Debugging and backup

Debugging firewall problems can be very difficult, but UFW has a simple logging framework that makes it easy to see why traffic is blocked. To turn on logging, start with sudo ufw logging medium. Logs will be written to /var/log/ufw.log. Here’s a UFW BLOCK line where Apache has not been allowed through the firewall:

Jan 5 18:14:50 trusty-lamp kernel: [ 3165.091697] [UFW BLOCK] IN=eth2 OUT= MAC=08:00:27:a1:a3:c5:00:1e:8c:e3:b6:38:08:00 SRC=192.168.0.54 DST=192.168.0.142 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=65499 DF PROTO=TCP SPT=41557 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0

From this, we can see all of the information about the source of the request as well as the destination. When you can’t access a service, this logging makes it easy to see if it’s the firewall or something else causing problems. High logging can use a large amount of disk space and IO, so when not debugging it’s recommended to set it to low or off.

Once you have everything configured to your liking, you might discover that there isn’t anything in /etc with your rules configured. That’s because ufw actually stores its rules in /lib/ufw. If you look at /lib/ufw/user.rules, you’ll see iptables configurations for everything you’ve set. In fact, UFW supports custom iptables rules too if you have one or two rules that are just too complex for UFW.

For server backups, make sure to include the /lib/ufw directory. I like to create a symlink from /etc/ufw/user-rules to /lib/ufw. That way, it’s easy to remember where on disk the rules are stored.

Next steps

Controlling inbound traffic is a great first step, but controlling outbound traffic is better. For example, if your server doesn’t send email, you could prevent some hacks from being able to reach mail servers on port 25. If your server has many shell users, you can prevent them from running servers without being approved first. What other security tools are good for individual and small server deployments? Let me know in the comments!

Categories: Drupal Universe

Turning Drupal outside-in

Wed, 2016-02-10 10:49

Dries Buytaert (via DrupalFire)

There has been a lot of discussion around the future of the Drupal front end both on Drupal.org (#2645250, #2645666, #2651660, #2655556) and on my blog posts about the future of decoupled Drupal, why a standard framework in core is a good idea, and the process of evaluating frameworks. These all relate to my concept of "progressive decoupling", in which some portions of the page are handed over to client-side logic after Drupal renders the initial page (not to be confused with "full decoupling").

My blog posts have drawn a variety of reactions. Members of the Drupal community, including Lewis Nyman, Théodore Biadala and Campbell Vertesi, have written blog posts with their opinions, as well as Ed Faulkner of the Ember community. Last but not least, in response to my last blog post, Google changed Angular 2's license from Apache to MIT for better compatibility with Drupal. I read all the posts and comments with great interest and wanted to thank everyone for all the feedback; the open discussion around this is nothing short of amazing. This is exactly what I hoped for: community members from around the world brainstorming about the proposal based on their experience, because only with the combined constructive criticism will we arrive at the best solution possible.

Based on the discussion, rather than selecting a client-side JavaScript framework for progressive decoupling right now, I believe the overarching question the community wants to answer first is: How do we keep Drupal relevant and widen Drupal's adoption by improving the user experience (UX)?

Improving Drupal's user experience is a topic near and dear to my heart. Drupal's user experience challenges led to my invitation to Mark Boulton to redesign Drupal 7, the creation of the Spark initiative to improve the authoring experience for Drupal 8, and continued support for usability-related initiatives. In fact, the impetus behind progressive decoupling and adopting a client-side framework is the need to improve Drupal's user experience.

It took me a bit longer than planned, but I wanted to take the time to address some of the concerns and share more of my thoughts about improving the user experience topic (and JavaScript frameworks).

To iterate or to disrupt?

In his post, Lewis writes that the issues facing Drupal's UX "go far deeper than code" and that many of the biggest problems found during the Drupal 8 usability study last year are not resolved with a JavaScript framework. This is true; the results of the Drupal 8 usability study show that Drupal can confuse users with its complex mental models and terminology, but they also show how modern features like real-time previews and in-page block insertion are increasingly assumed to be available.

To date, much of our UX improvements have been based on an iterative process, meaning it converges on a more refined end state by removing problems in the current state. However, we also require disruptive thinking, which is about introducing entirely new ideas, for true innovation to happen. It's essentially removing all constraints and imagining what an ideal result would look like.

I think we need to recognize that while some of the documented usability problems coming out of the Drupal 8 usability study can be addressed by making incremental changes to Drupal's user experience (e.g. our terminology), other well-known usability problems most likely require a more disruptive approach (e.g. our complex mental model). I also believe that we must acknowledge that disruptive improvements are possibly more impactful in keeping Drupal relevant and widening Drupal's adoption.

At this point, to get ahead and lead, I believe we have to do both. We have to iterate and disrupt.

From inside-out to outside-in

Let's forget about Drupal for a second and observe the world around us. Think of all the web applications you use on a regular basis, and consider the interaction patterns you find in them. In popular applications like Slack, the user can perform any number of operations to edit preferences (such as color scheme) and modify content (such as in-place editing) without incurring a single full page refresh. Many elements of the page can be changed without the user's flow being interrupted. Another example is Trello, in which users can create new lists on the fly and then add cards to them without ever having to wait for a server response.

Contrast this with Drupal's approach, where any complex operation requires the user to have detailed prior knowledge about the system. In our current mental model, everything begins in the administration layer at the most granular level and requires an unmapped process of bottom-up assembly. A user has to make a content type, add fields, create some content, configure a view mode, build a view, and possibly make the view the front page. If each individual step is already this involved, consider how much more difficult it becomes to traverse them in the right order to finally see an end result. While very powerful, the problem is that Drupal's current model is "inside-out". This is why it would be disruptive to move Drupal towards an "outside-in" mental model. In this model, I should be able to start entering content, click anything on the page, seamlessly edit any aspect of its configuration in-place, and see the change take effect immediately.

Drupal 8's in-place editing feature is actually a good start at this; it enables the user to edit what they see without an interrupted workflow, with faster previews and without needing to find what thing it is before they can start editing.

Making it real with content modeling

Eight years ago in 2007, I wrote about a database product called DabbleDB. I shared my belief that it was important to move CCK and Views into Drupal's core and learn from DabbleDB's integrated approach. DabbleDB was acquired by Twitter in 2010 but you can still find an eight-year-old demo video on YouTube. While the focus of DabbleDB is different, and the UX is obsolete, there is still a lot we can learn from it today: (1) it shows a more integrated experience between content creation, content modeling, and creating views of content, (2) it takes more of an outside-in approach, (3) it uses a lot less intimidating terminology while offering very powerful capabilities, and (4) it uses a lot of in-place editing. At a minimum, DabbleDB could give us some inspiration for what a better, integrated content modeling experience could look like, with the caveat that the UX should be as effortless as possible to match modern standards.

Other new data modeling approaches with compelling user experiences have recently entered the mainstream. These include back end-as-a-service (BEaaS) solutions such as Backand, which provides a visually clean drag-and-drop interface for data modeling and helpful features for JavaScript application developers. Our use cases are not quite the same, but Drupal would benefit immensely from a holistic experience for content modeling and content views that incorporates both the rich feature set of DabbleDB and the intuitive UX of Backand.

This sort of vision was not possible in 2007 when CCK was a contributed module for Drupal 6. It still wasn't possible in Drupal 7 when Views existed as a separate contributed module. But now that both CCK and Views are in Drupal 8 core, we can finally start to think about how we can more deeply integrate the two user experiences. This kind of integration would be nontrivial but could dramatically simplify how Drupal works. This should be really exciting because so many people are attracted to Drupal exactly because of features like CCK and Views. Taking an integrated approach like DabbleDB, paired with a seamless and easy-to-use experience like Slack, Trello and Backand, is exactly the kind of disruptive thinking we should do.

What most of the examples above have in common are in-place editing, immediate previews, no page refreshes, and non-blocking workflows. The implications on our form and render systems of providing configuration changes directly on the rendered page are significant. To achieve this requires us to have robust state management and rendering on the client side as well as the server side. In my vision, Twig will provide structure for the overall page and non-interactive portions, but more JavaScript will more than likely be necessary for certain parts of the page in order to achieve the UX that all users of the web have come to expect.

We shouldn't limit ourselves to this one example, as there are a multitude of Drupal interfaces that could all benefit from both big and small changes. We all want to improve Drupal's user experience — and we have to. To do so, we have to constantly iterate and disrupt. I hope we can all collaborate on figuring out what that looks like.

Special thanks to Preston So and Kevin O'Leary for contributions to this blog post and to Wim Leers for feedback.

Categories: Drupal Universe

InVision webinar recap: The future of the CMS

Thu, 2016-02-04 17:00

Four Kitchens (via DrupalFire)

December saw the 22nd installment of InVision’s webinar series on design and tech. This episode features our own Todd Ross Nienkerk presenting, with InVision’s Margaret Kelsey moderating. The following is a summary; to watch the whole presentation, along with the Q&A period, head over to InVision’s webinar recap site.

What is the future of the CMS?

We need to rethink how we manage, publish, and consume content.

Categories: Drupal Universe

( ͡° ͜ʖ ͡°) Let's think about memes

Wed, 2016-02-03 20:00

Four Kitchens (via DrupalFire)

Digital strategy is a business that needs to understand how ideas move between users (how new fads develop and propagate on the Internet), and looking at memes is one way we can begin to think about the transmission of culture and the spread of ideas.

Categories: Drupal Universe

Giving Back in 2016

Sat, 2016-01-23 23:18

Larry Garfield (via DrupalFire)

Three years ago, I ended 2012 with a call to the Drupal community to Get Off the Island. Mainly I wanted to encourage Drupal developers to prepare themselves for the major changes coming in Drupal 8 by connecting with other PHP projects and with the broader community, and called on people to attend non-Drupal conferences in order to visit and learn from other communities.


Categories: Drupal Universe
