Drupal Fire - Quick Roundup from important Drupal blogs and sites
The second week of February, Web Chefs Patrick and Flip visited San Francisco for the Forward 4 Web Technology Summit.
Yesterday, Development Seed hosted an Open Mapping Happy Hour at our office in Blagden Alley. We were delighted that over 100 open mappers and FedGISers braved the rain for some great conversation and scheming.
A few highlights of last night’s presentations include:
- Dale Kunce announced the launch of the new Missing Maps website featuring user pages and leaderboards for tracking OpenStreetMap edits.
- Jubal Harpster showed off POSM (Portable Open Street Map), a field-deployable OpenMapKit server.
- Derek Lieu spoke about Macrocosm, an OpenStreetMap clone that lets a government or large institution collaboratively manage map data.
- Mikel Maron spoke about Mapbox Open Source Initiatives.
- Becky Chen announced the launch of Astro Digital’s new Digital Imagery Browser. (As seen in TechCrunch!)
- Kiwako Sakamoto talked about India Lights, a tool to explore 20 years of nighttime imagery over India.
- Krystal Wilson noted The Tauri Group’s new Report on Space Startups.
Thanks to everyone who came out last night. We had an awesome time hanging out and talking open mapping. Thanks also to Mapbox for cohosting the event.
[Arnold Schwarzenegger voice] What is a CDO, and what does one do? We’ll answer that and explain why it’s important to consider when planning your digital strategy.
"It depends." That may be the most-used sentence when evaluating the right technology for a given challenge. Choosing the right technology is not a trivial problem. When the needs are poorly defined, or not defined at all, it may not even be possible to suggest a good solution. But even if the problem is broken down into simple technical challenges, there are multiple factors that influence the implementation of the solution. Budget constraints, internal knowledge, timelines, resourcing issues, corporate policies, integration with existing systems, technical feasibility, and many other factors will influence the final decision.
Leaving these other, extremely valid, considerations aside, let’s focus on the technical part. Oftentimes you have different frameworks, applications, libraries, SaaS offerings, and more that can provide a solution with varying degrees of success. The sheer size of this list of options can complicate the decision. It is common practice to check what everyone else is doing. Doing market research is a good first step, but it should not determine the final decision.
Many times when a new technology arises, there is media hype. All of a sudden, computer-science blogs and news sites are flooded with posts explaining the merits of this groundbreaking new solution. Often these communications are very passionate and contagious, and that’s good! Their goal is to get other people to try the new tools. Information flows rapidly and allows people who are solving that particular problem to assess whether they want to try a new direction.
This situation can result in what I call the shiny effect. I admit that I have been blinded by it in the past. I have spent large amounts of time learning frameworks that turned out to be great for only a narrow set of use cases. That has made me cautious about how I approach new tools, even though I still get excited about them.
One widely used argument is based on authority. (If all these multi-million dollar organizations are using this solution for the exact same problem I have, then I should do the same. After all, they would never choose to implement it without careful analysis.) Resorting to the argument from authority will not always lead you to a reliable answer; the market trend will not always be accurate, and there are multiple examples of that.
I remember how, in the not so distant past, NoSQL databases were postulated by some as the future; people went so far as to say that SQL was dead. There was a time when Apache was going to be completely supplanted by other alternatives. The LAMP stack was supposed to be replaced by the MEAN stack without leaving a trace. SOAP and RPC were going to disappear because of REST, while REST would become irrelevant because of GraphQL. And by now, supposedly, no one was going to be using Basic Auth anymore. The opposite happened as well: there were prophecies that emerging solutions were doomed to disappear, and those were never fulfilled either.
None of that happened. All the tools and technologies listed above have found a way to coexist and share the spectrum of solutions. Even when they seemed to overlap at the beginning, in most cases the emerging alternatives ended up being a better solution only for a subset of scenarios. That is a very big win for everyone: we now have two different tools that are very good at solving similar problems. Even if you can build a search backend in MySQL, you are probably going to have a better experience if you do it with Solr, for instance.
Disregarding well-proven technological solutions with thriving ecosystems is dangerous. Failing to learn about new solutions that may prove better at solving your problem is just as dangerous. We really need to take the time to understand the problem that the new solution is fixing before jumping into it with both feet. Carey Flichel attributes this, amongst other causes, to boredom and lack of understanding in Why Developers Keep Making Bad Technology Choices.
It’s no surprise then that we want to try something new, even if we’ve adequately solved a problem before. We are natural puzzle solvers, and sometimes you just want to try a new puzzle.
Choosing the right solution often involves checking what everyone else is doing, and then analyzing the problem for yourself while taking all the options into consideration. You must not trust new solutions just because they’re new any more than you trust old solutions just because they’re old. Instead, zero in on the problem to be solved and find the best solution regardless of the buzz. Keep every technology solution on the table until you understand the nuances of the problem space, and let that be your guiding light. Being aware of all the technologies involved, and knowing which is the best choice, takes time and a lot of research. Many times this will require a software architect to guide you.
The new Astro Digital Imagery Browser launched today. It is incredibly easy to process satellite imagery for the places that you care about – today and into the future. Use the platform to track urban sprawl, detect illegal logging, or monitor the agricultural modernization project that you are about to kick off.
Fast and responsive. We use Mapbox Vector Tiles to quickly filter through hundreds of thousands of satellite metadata records right in your browser.
In this edition: Apps and extensions we love! Web Chef SANDCamp sessions! SXSW parties we’re hosting! And four new ways to become a Web Chef. Oh, my!
As you might know, I’ve been an elected At Large member of the Drupal Association board for the last two years, and I’ve been chairing the Governance Committee. Some highlights of that work include changes that implemented term limits for Board members, the extension of the term for At Large members to two years, liaising with the community on issues they wanted to discuss, and a myriad of activities related to good governance.
It’s time for SANDCamp 2016, the annual Drupal camp! This year, Four Kitchens has a strong presence along with many other great people and organizations. Here’s a preview of the sessions we’re presenting this year.
We’re happy to welcome Ali Felski to the Development Seed family! Ali has led design teams at Sunlight Foundation and iStrategyLabs. She is going to help us improve design practices across Development Seed, including integrating UX and usability research into the fabric of each project.
A native Yooper, Ali is more recently a proud Washingtonian, appreciating the weather down here a little more. Over the past decade, Ali has brought good design to the far corners of Washington DC. She improved usability at the CIA, built the Sunlight Foundation’s design team from the ground up, and systematized usability research at iStrategyLabs, where she served as user experience director. Not to mention winning Best Personal Portfolio at SXSW. And while her dream job would be as a taste-tester of artisanal chocolate, she’s convinced us that transforming design through UX thinking is a close second.
Satellite imagery is among the most powerful open data. The Defense Meteorological Satellite Program captures images of the earth just after dusk every night–and it’s been doing so for over 20 years. Working with the World Bank and the University of Michigan, we’ve extracted and organized the data from every night for the past two decades in all of India’s 600,000 villages.
These six billion data points that power India Night Lights tell a rich story of India’s urban development and electrification from 1992-2012. The data is complex and hard to work with. But from the chaos emerge real insights, such as the role of politics in service provision and the impact of state-funded electrification projects.
- For those checking our math at home: some villages have multiple readings per night.
How can you prepare for something you can’t prepare for? What happens after an out-of-the-blue paradigm shift? How can you strategize for a digital Black Swan?
What is it? Why should we care?
If fewer is better, why multiple aggregates?
Using the API
I like to register every script in my module as a library, and only add scripts using drupal_add_library (or the corresponding #attached property; see below). This way I’m clearly expressing any dependencies, and when my script is added in more than one place, the options I give the aggregation system live in one place in case I need to make a change. It’s also a nice way to safely deprecate scripts: if all the dependencies are spelled out, you can be assured that removing a script won’t break things.
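For illustration, here is roughly what that looks like in Drupal 7. The module name, library name, and file paths are made up for the example:

/**
 * Implements hook_library().
 */
function mymodule_library() {
  $path = drupal_get_path('module', 'mymodule');
  $libraries['mymodule.toolbar'] = array(
    'title' => 'Mymodule toolbar behaviors',
    'version' => '1.0',
    'js' => array(
      $path . '/js/toolbar.js' => array(),
    ),
    // Dependencies are spelled out here and resolved by the library system.
    'dependencies' => array(
      array('system', 'jquery.once'),
    ),
  );
  return $libraries;
}

// Attach the library from a render array instead of adding the file directly.
$build['#attached']['library'][] = array('mymodule', 'mymodule.toolbar');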
Instructions for using hook_library and drupal_add_library are well documented. However, one important thing to note is the third parameter to drupal_add_library, every_page, which is used to help optimize aggregation. At registration time in hook_library, you typically don’t know whether a library will be used on every page request, especially if you’re registering libraries for other module authors to use. This is why drupal_add_library has an every_page parameter. If you’re adding a library unconditionally on every page, be sure to set the every_page parameter to TRUE so that Drupal will group your script into a stable aggregate with other scripts that are added to every page. We’ll discuss every_page in more detail below.
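As a hypothetical sketch, a module that adds its library on every request might do this (module and library names are the same made-up ones from above):

/**
 * Implements hook_init().
 */
function mymodule_init() {
  // The third argument is $every_page. This script is added unconditionally,
  // so tell Drupal to place it in the stable, site-wide aggregate.
  drupal_add_library('mymodule', 'mymodule.toolbar', TRUE);
}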
The possible values for scope are ‘header’ and ‘footer’. A scope of ‘header’ will output the script within the <head> tag; a scope of ‘footer’ will output the script just before the closing </body> tag.
The ‘group’ option takes any integer as a value, although you should stick to the constants JS_LIBRARY, JS_DEFAULT, and JS_THEME to follow convention and avoid an excessive number of aggregates.
The ‘every_page’ option expects a boolean. This is a way to tell the aggregation system that your script is guaranteed to be included on every page. This flag is commonly overlooked, but is very important. Any scripts that are added on every page should be included in a stable aggregate group, one that is the same from page to page, such that the browser can make good use of caching. The every_page parameter is used for exactly that purpose. Miss out on using every_page and your script is apt to be lumped into a volatile aggregate that changes from page to page, forcing the browser to download your code anew on each page request.
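Putting those options together, a sketch of a drupal_add_js() call for a script that appears on every page and should load in the footer might look like this (the path is illustrative):

drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/toolbar.js', array(
  'scope' => 'footer',    // Output just before the closing </body> tag.
  'group' => JS_DEFAULT,  // Stick to the standard group constants.
  'every_page' => TRUE,   // Added on every request, so keep it in the stable aggregate.
));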
What can go wrong
Potential number of aggregates
Misuse of Groups
Inline and External Scripts
Extra care should be taken when adding external and inline scripts. drupal_add_js and friends preserve order really well: they track the order in which the function is called for all the scripts added to the page and respect it. That’s all well and good. However, if an inline or external script is added between file scripts, it can split the aggregate. For example, if you add 3 scripts (2 file scripts and 1 external), all with the same scope, group, and every_page values, you might think that Drupal would aggregate the 2 file scripts and output 2 script tags total. However, if drupal_add_js gets called for a file, then the external, then the other file, you end up with two aggregates and the external script in between, for a total of 3 script tags. This is probably not what you want. In this case it’s best to specify a high or low weight value for the external script so it sits at the top or bottom of the aggregate and doesn’t end up splitting it. The same goes for inline JS.
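For example (hypothetical paths and URL), pushing the external script to the bottom of the group with a weight keeps the two file scripts together in a single aggregate:

drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/first.js');
// Without a weight, this external script would be output between the two
// file scripts and split their aggregate into two script tags.
drupal_add_js('https://example.com/widget.js', array(
  'type' => 'external',
  'weight' => 100,
));
drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/second.js');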
Scripts added in a different order
I alluded to this above, but certain situations can arise where scripts get added to an aggregate in a different order from one request to the next. This can happen for any number of reasons, but because Drupal tracks the call order of drupal_add_js, you end up with a different order for the same scripts in a group and thus a different aggregate. Sadly, the same code will then sit in two aggregates on the server with slightly different source order; had the order been the same, they would have been identical, produced a single aggregate, and been cached by the browser from page to page. The solution in this case is to use weight values to ensure the same order within an aggregate from page to page. It’s not ideal, because you don’t want to have to set weights on every hook_library / drupal_add_js call, so I’d recommend handling it on a case-by-case basis.
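A minimal sketch of that case-by-case fix, assuming two hypothetical scripts that can be registered in either order depending on the code path:

$path = drupal_get_path('module', 'mymodule');
// Explicit weights pin the source order inside the aggregate, so the same
// aggregate file is produced no matter which call happens first.
drupal_add_js($path . '/js/base.js', array('every_page' => TRUE, 'weight' => -10));
drupal_add_js($path . '/js/enhancements.js', array('every_page' => TRUE, 'weight' => -5));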
Clearly, there is a lot that can go wrong, or at least end up taking you to a place that is sub-optimal. Considering all that we’ve covered, I’ve come up with a list of best practices to follow:
Always use the API, never ‘shoehorn’ scripts into the page
Use every_page when appropriate
The every_page option signals to the aggregation system that your script will be added to all pages on the site. If your script qualifies, make sure you’re setting every_page = TRUE. This puts your script into a "stable" aggregate that is cached and reused often.
Advanced CSS/JS Aggregation (AdvAgg) is a contributed module that replaces Drupal’s built-in aggregation system with its own, making many improvements and adding additional features along the way. The way that you add scripts is the same, but behind the scenes, how the groups are generated and combined into their respective aggregates is different. AdvAgg also attempts to overcome many of the scenarios where core aggregation can go wrong that I listed above. I won’t cover all of what AdvAgg has to offer; it has an impressive scope that would warrant its own article or more. Instead I’ll touch on how it can solve some of the problems we listed above, as well as some other neat features.
Out of the box the core AdvAgg module supplies a handful of transparent backend improvements. One of those is stampede protection. After a code release with JS/CSS changes, there is a potential that multiple requests for the same page will all trigger the calculation and writing of the same new aggregates, duplicating work. On high traffic sites, this can lead to deadlocks which can be detrimental to performance. AdvAgg implements locking so that only the first process will perform the work of calculating and writing aggregates. AdvAgg also employs smarter caching strategies so the work of calculating and writing aggregates is done as infrequently as possible, and only when there is a change to the source files. These are nice improvements, but there is great power to behold in AdvAgg’s submodules.
AdvAgg comes with two submodules to enhance JS and CSS compression. They each provide a pluggable way to have a compressor library act on each aggregate before it’s saved to disk. There are a few options for JS compression; JSqueeze is a great option that will work out of the box. However, if you have the flexibility on your server to install the JSMIN C extension, it’s slightly more performant.
AdvAgg Modifier is another submodule that ships with AdvAgg, and here is where we get into solving some of the problems listed earlier. Let’s explore some of the more interesting options made available by AdvAgg Modifier:
Move JS to the footer
This week's Grammy Awards was one of the best examples of the high-traffic event websites that Acquia is so well known for. This marks the fourth time we have hosted the Grammys' website. We saw close to 5 million unique visitors requesting nearly 20 million pages on the day of the awards and the day after. From television's Emmys to Super Bowl advertisers' sites, Acquia has earned its reputation for keeping its customers' Drupal sites humming during the most crushing peaks of traffic.
These "super spikes" aren't always fun. For the developers building these sites to the producers updating each site during the event, nothing compares to the sinking feeling when a site fails when it is needed the most. During the recent Superbowl, one half-time performer lost her website (not on Drupal), giving fans the dreaded 503 Service Unavailable error message. According to CMSWire: "Her website was down well past midnight for those who wanted to try and score tickets for her tour, announced just after her halftime show performance". Yet for Bruno Mars' fans, his Acquia-based Drupal site kept rolling even as millions flooded his site during the half-time performance.
For the Grammys, we can plan ahead and expand their infrastructure prior to the event. This is easy thanks to Acquia Cloud's elastic platform capacity. Our technical account managers and support teams work with the producers at the Grammys to make sure the right infrastructure and configuration is in place. Specifically, we simulate award night traffic as best we can, and use load testing to prepare the infrastructure accordingly. If needed, we add additional server capacity during the event itself. Just prior to the event, Acquia takes a 360 degree look at the site to ensure that all of the stakeholders are aligned, whether internal to Acquia or external at a partner. We have technical staff on site during the event, and remote teams that provide around the clock coverage before and after the event.
Few people know what goes on behind the scenes during these super spikes, but our biggest source of pride is that our work is often invisible: a job well done means that our customer's best day didn't turn into their worst day.
It is with great sadness that we learned last week that Richard Burford has passed away. This is a tragic loss for his family, for Acquia, the Drupal community, and the broader open source world. Richard was a Sr. Software Engineer at Acquia for three and a half years (I still remember him interviewing with me), and was known as psynaptic in the Drupal community. Richard had been a member of the Drupal community for more than nine years. During that time, he contributed hundreds of patches across multiple projects, started a Drupal user group in his area, and helped drive the Drupal community in the UK where he lived. Richard was a great person, a dedicated and hard-working colleague, a generous contributor to Drupal, and a friend. Richard was 36 years young with a wife and three children. He was the sole income earner for the family, so a fundraising campaign has been started to help his family during these difficult times; please consider contributing.
Firewalls are a tool that most web developers only deal with when sites are down or something is broken. Firewalls aren’t fun, and it’s easy to ignore them entirely on smaller projects.
Part of why firewalls are complicated is that what we think of as a "firewall" on a typical Linux or BSD server is responsible for much more than just blocking access to services. Firewalls (like iptables, nftables, or pf) manage filtering inbound and outbound traffic, network address translation (NAT), Quality of Service (QoS), and more. Most firewalls have an understandably complex configuration to support all of this functionality. Since firewalls are dealing with network traffic, it’s relatively easy to lock yourself out of a server by blocking SSH by mistake.
In the desktop operating system world, there has been great success in the "application" firewall paradigm. When I load a multiplayer game, I don’t care about the minutiae of ports and protocols - just that I want to allow that game to host a server. Windows, OS X, and Ubuntu all support application firewalls where applications describe what ports and protocols they need open. The user can then block access to those applications if they want.
Uncomplicated Firewall (ufw) is shipped by default with Ubuntu, but like OS X (and unlike Windows) it is not turned on automatically. With a few simple commands we can get it running, allow access to services like Apache, and even add custom services like MariaDB that don’t ship with a ufw profile. UFW is also available for other Linux distributions, though they may have their own preferred firewall tool.
Before you start
Locking yourself out of a system is a pain to deal with, whether it’s lugging a keyboard and monitor to your closet or opening a support ticket. Before testing out a firewall, make sure you have some way to get into the server should you lock yourself out. In my case, I’m using a LAMP vagrant box, so I can either attach the Virtualbox GUI with a console, or use vagrant destroy / vagrant up to start clean. With remote servers, console access is often available through a management web interface or a "recovery" SSH server like Linode’s Lish.
It’s good to run a scan on a server before you set up a firewall, so you know what is initially being exposed. Many services will bind to ‘localhost’ by default, so even though they are listening on a network port they can’t be accessed from external systems. I like to use nmap (which is available in every package manager) to run port scans.
$ nmap 192.168.0.110
Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:16 EDT
Nmap scan report for trusty-lamp.lan (192.168.0.110)
Host is up (0.0045s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
3306/tcp open mysql
Nmap done: 1 IP address (1 host up) scanned in 0.23 seconds
Listening for SSH and HTTP connections makes sense, but we probably don’t need rpcbind (used for NFS) or MySQL to be exposed.
Turning on the firewall
The first step is to tell UFW to allow SSH access:
$ sudo ufw app list
$ sudo ufw allow openssh
Rules updated (v6)
$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
$ sudo ufw status
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Test to make sure the SSH rule is working by opening a new terminal window and ssh’ing to your server. If it doesn’t work, run sudo ufw disable and see if you have some other firewall configuration that’s conflicting with UFW. Let’s scan our server again now that the firewall is up:
$ nmap 192.168.0.110
Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:31 EDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.07 seconds
UFW is blocking pings by default. We need to run nmap with -Pn so it blindly checks ports.
$ nmap -Pn 192.168.0.110
Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:32 EDT
Nmap scan report for trusty-lamp.lan (192.168.0.142)
Host is up (0.00070s latency).
Not shown: 999 filtered ports
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 6.59 seconds
Excellent! We’ve blocked access to everything but SSH. Now, let’s open up Apache.
$ sudo ufw allow apache
Rule added (v6)
You should now be able to access Apache on port 80. If you need SSL, allow "apache secure" as well, or just use the “apache full” profile. You’ll need quotes around the application name because of the space.
To remove a rule, prefix the entire rule you created with "delete". To remove the Apache rule we just created, run sudo ufw delete allow apache.
UFW operates in a "default deny" mode, where incoming traffic is denied and outgoing traffic is allowed. To operate in a “default allow” mode, run sudo ufw default allow. After running this, perhaps you don’t want Apache to be able to listen for requests, and only want to allow access from localhost. Using ufw, we can deny access to the service:
$ sudo ufw deny apache
Rule updated (v6)
You can also use "reject" rules, which tell a client that the service is blocked. Deny forces the connection to timeout, not telling an attacker that a service exists. In general, you should always use deny rules over reject rules, and default deny over default allow.
Address and interface rules
UFW lets you add conditions to the application profiles it ships with. For example, say you are running Apache for an intranet, and have OpenVPN setup for employees to securely connect to the office network. If your office network is connected on eth1, and the VPN on tun0, you can grant access to both of those interfaces while denying access to the general public connected on eth0:
$ sudo ufw allow in on eth1 to any app apache
$ sudo ufw allow in on tun0 to any app apache
To use IP address ranges instead of interface names, replace "in on" and the interface with "from" and an address range, as in the mariadb rule shown further below.
While UFW lets you work directly with ports and protocols, this can be complicated to read over time. Is it Varnish, Apache, or Nginx that’s running on port 8443? With custom application profiles, you can easily specify ports and protocols for your own custom applications, or those that don’t ship with UFW profiles.
Remember up above when we saw MySQL (well, MariaDB in this case) listening on port 3306? Let’s open that up for remote access.
Pull up a terminal and browse to /etc/ufw/applications.d. This directory contains simple INI files. For example, openssh-server contains:
[OpenSSH]
title=Secure shell server, an rshd replacement
description=OpenSSH is a free implementation of the Secure Shell protocol.
ports=22/tcp
We can create a mariadb profile ourselves to work with the database port. Drop a new file (for example, mariadb) into /etc/ufw/applications.d with contents along these lines:
[MariaDB]
title=MariaDB database server
description=MariaDB is a MySQL-compatible database server.
ports=3306/tcp
$ sudo ufw app list
$ sudo ufw allow from 192.168.0.0/24 to any app mariadb
You should now be able to access the database from any address on your local network.
Debugging and backup
Debugging firewall problems can be very difficult, but UFW has a simple logging framework that makes it easy to see why traffic is blocked. To turn on logging, start with sudo ufw logging medium. Logs will be written to /var/log/ufw.log. Here’s a UFW BLOCK line where Apache has not been allowed through the firewall:
Jan 5 18:14:50 trusty-lamp kernel: [ 3165.091697] [UFW BLOCK] IN=eth2 OUT= MAC=08:00:27:a1:a3:c5:00:1e:8c:e3:b6:38:08:00 SRC=192.168.0.54 DST=192.168.0.142 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=65499 DF PROTO=TCP SPT=41557 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0
From this, we can see all of the information about the source of the request as well as the destination. When you can’t access a service, this logging makes it easy to see whether it’s the firewall or something else causing problems. Higher logging levels can use a large amount of disk space and I/O, so when you’re not debugging it’s recommended to set logging to low or off.
Once you have everything configured to your liking, you might discover that there isn’t anything in /etc with your rules configured. That’s because ufw actually stores its rules in /lib/ufw. If you look at /lib/ufw/user.rules, you’ll see iptables configurations for everything you’ve set. In fact, UFW supports custom iptables rules too if you have one or two rules that are just too complex for UFW.
For server backups, make sure to include the /lib/ufw directory. I like to create a symlink from /etc/ufw/user-rules to /lib/ufw. That way, it’s easy to remember where on disk the rules are stored.
Controlling inbound traffic is a great first step, but controlling outbound traffic is better. For example, if your server doesn’t send email, you could prevent some hacks from being able to reach mail servers on port 25. If your server has many shell users, you can prevent them from running servers without being approved first. What other security tools are good for individual and small server deployments? Let me know in the comments!
There has been a lot of discussion around the future of the Drupal front end both on Drupal.org (#2645250, #2645666, #2651660, #2655556) and on my blog posts about the future of decoupled Drupal, why a standard framework in core is a good idea, and the process of evaluating frameworks. These all relate to my concept of "progressive decoupling", in which some portions of the page are handed over to client-side logic after Drupal renders the initial page (not to be confused with "full decoupling").
My blog posts have drawn a variety of reactions. Members of the Drupal community, including Lewis Nyman, Théodore Biadala and Campbell Vertesi, have written blog posts with their opinions, as well as Ed Faulkner of the Ember community. Last but not least, in response to my last blog post, Google changed Angular 2's license from Apache to MIT for better compatibility with Drupal. I read all the posts and comments with great interest and wanted to thank everyone for all the feedback; the open discussion around this is nothing short of amazing. This is exactly what I hoped for: community members from around the world brainstorming about the proposal based on their experience, because only with the combined constructive criticism will we arrive at the best solution possible.
Improving Drupal's user experience is a topic near and dear to my heart. Drupal's user experience challenges led to my invitation to Mark Boulton to redesign Drupal 7, the creation of the Spark initiative to improve the authoring experience for Drupal 8, and continued support for usability-related initiatives. In fact, the impetus behind progressive decoupling and adopting a client-side framework is the need to improve Drupal's user experience.
To iterate or to disrupt?
To date, many of our UX improvements have come from an iterative process, meaning we converge on a more refined end state by removing problems in the current state. However, for true innovation to happen, we also need disruptive thinking, which is about introducing entirely new ideas. It's essentially removing all constraints and imagining what an ideal result would look like.
I think we need to recognize that while some of the documented usability problems coming out of the Drupal 8 usability study can be addressed by making incremental changes to Drupal's user experience (e.g. our terminology), other well-known usability problems most likely require a more disruptive approach (e.g. our complex mental model). I also believe that we must acknowledge that disruptive improvements are possibly more impactful in keeping Drupal relevant and widening Drupal's adoption.
At this point, to get ahead and lead, I believe we have to do both. We have to iterate and disrupt.
From inside-out to outside-in
Let's forget about Drupal for a second and observe the world around us. Think of all the web applications you use on a regular basis, and consider the interaction patterns you find in them. In popular applications like Slack, the user can perform any number of operations to edit preferences (such as color scheme) and modify content (such as in-place editing) without incurring a single full page refresh. Many elements of the page can be changed without the user's flow being interrupted. Another example is Trello, in which users can create new lists on the fly and then add cards to them without ever having to wait for a server response.
Contrast this with Drupal's approach, where any complex operation requires the user to have detailed prior knowledge about the system. In our current mental model, everything begins in the administration layer at the most granular level and requires an unmapped process of bottom-up assembly. A user has to make a content type, add fields, create some content, configure a view mode, build a view, and possibly make the view the front page. If each individual step is already this involved, consider how much more difficult it becomes to traverse them in the right order to finally see an end result. While very powerful, the problem is that Drupal's current model is "inside-out". This is why it would be disruptive to move Drupal towards an "outside-in" mental model. In this model, I should be able to start entering content, click anything on the page, seamlessly edit any aspect of its configuration in-place, and see the change take effect immediately.
Drupal 8's in-place editing feature is actually a good start at this; it enables users to edit what they see without interrupting their workflow, with faster previews and without needing to first figure out what that thing is before they can start editing.
Making it real with content modeling
Eight years ago in 2007, I wrote about a database product called DabbleDB. I shared my belief that it was important to move CCK and Views into Drupal's core and learn from DabbleDB's integrated approach. DabbleDB was acquired by Twitter in 2010 but you can still find an eight-year-old demo video on YouTube. While the focus of DabbleDB is different, and the UX is obsolete, there is still a lot we can learn from it today: (1) it shows a more integrated experience between content creation, content modeling, and creating views of content, (2) it takes more of an outside-in approach, (3) it uses a lot less intimidating terminology while offering very powerful capabilities, and (4) it uses a lot of in-place editing. At a minimum, DabbleDB could give us some inspiration for what a better, integrated content modeling experience could look like, with the caveat that the UX should be as effortless as possible to match modern standards.
This sort of vision was not possible in 2007 when CCK was a contributed module for Drupal 6. It still wasn't possible in Drupal 7 when Views existed as a separate contributed module. But now that both CCK and Views are in Drupal 8 core, we can finally start to think about how we can more deeply integrate the two user experiences. This kind of integration would be nontrivial but could dramatically simplify how Drupal works. This should be really exciting because so many people are attracted to Drupal exactly because of features like CCK and Views. Taking an integrated approach like DabbleDB, paired with a seamless and easy-to-use experience like Slack, Trello and Backand, is exactly the kind of disruptive thinking we should do.
We shouldn't limit ourselves to this one example, as there are a multitude of Drupal interfaces that could all benefit from both big and small changes. We all want to improve Drupal's user experience — and we have to. To do so, we have to constantly iterate and disrupt. I hope we can all collaborate on figuring out what that looks like.
December saw the 22nd installment of InVision’s webinar series on design and tech. This episode features our own Todd Ross Nienkerk presenting, with InVision’s Margaret Kelsey moderating. The following is a summary; to watch the whole presentation, along with the Q&A period, head over to InVision’s webinar recap site.
What is the future of the CMS?
We need to rethink how we manage, publish, and consume content.
Digital strategy is the business of understanding how ideas move between users and how new fads develop and propagate on the Internet. Looking at memes is one way we can begin to think about the transmission of culture and the spread of ideas.
Three years ago, I ended 2012 with a call to the Drupal community to Get Off the Island. Mainly I wanted to encourage Drupal developers to prepare themselves for the major changes coming in Drupal 8 by connecting with other PHP projects and with the broader community, and called on people to attend non-Drupal conferences in order to visit and learn from other communities.