
[Thesis 1] Design Brief #4

IMG_6661

IMG_6662

Thesis Question

In digital systems, we access information through databases and software. Graphical interfaces make this process easy and almost invisible, but they might also lead to false assumptions and misconceptions derived from a lack of knowledge of the systems we engage with.

This project is concerned with the relationship between users and information as mediated by software. It will explore how the process of gathering information from databases is shaped by the algorithms that drive it. How much do we know about the systems we use daily? How aware are we of possible biases in data retrieval?

Research

Data, Database, and Information

This project aims to provoke reflection on our relationship with online sources of information. Though it might make use of online databases and retrieve data from them, its main concern is information. Therefore, a distinction between the three terms is necessary.

In this project, the term database defines any system of information, not restricted to a digital format. As researcher Christiane Paul points out, “it is essentially a structured collection of data that stands in the tradition of ‘data containers’ such as a book, a library, an archive, or Wunderkammer.” [@paul_database_2007]

Data, as a consequence, is the content of the database, whatever its nature — image, text, or number. Finally, information, often referred to as “processed data,” is understood in this project as a piece of content (data) that has been through a process of analysis or interpretation. New media artist and academic Mitchell Whitelaw defines data as “the raw material of information, its substrate,” whereas information “is the meaning derived from data in a particular context.” [@whitelaw_fcj-067_]

Information Arts

Given that the production and circulation of information are the main concerns of this project, it is situated in the field of information arts. This practice is defined by Christiane Paul and Jack Toolin as works that “explore or critically address the issues surrounding the processing and distribution of information by making information itself the medium and carrier of this exploration.” [@paul_encyclopedia_]

Considering that this work utilizes algorithms to process digital information, there is an overlap between information arts and digital, or new media, art. However, both the subject and the medium of this project are information; digital media is only one of the techniques employed here. Therefore, the category of information art will be adopted instead of digital or new media art.

Methodology and Series Concept

This project is the first one of a series intended to provoke discussion on our use of online sources of information. The series is defined by a set of rules:

  • Each work will utilise an online database as a source for a print output. Database is used here in the broad sense of a set of data — from a simple html page to a social network stream, for example.
  • The print output must relate to the source conceptually, making a clear link between the two. It should also evoke familiar print forms — books, diaries, posters, guides, instruction manuals, etc.
  • The prints will be generated through an automated process.
  • Both online sources and print forms should not only have a clear link, but also be relevant today.

rules

Paul and Toolin present a diagram that outlines the general practice of information arts. In this model, the work can either be finished at the stage of information, serving as a new data source, or generate a database or visualization that will then be redistributed. The latter is the case of this project, which will reorganize the data source, constructing a new database, and then translate it into new forms of communication.

information_arts_diagram

Printed media is utilised in this project as a communication device for its capacity to evoke the cultural meaning embedded in its familiar forms. ABC books, for instance, are directly associated with education, children, and language, no matter how diverse their formats and content.

The translation from digital to print also serves the purpose of creating an uncanny effect. Though both the online content and the printed media must be familiar, their displacement should generate an odd hybrid form. In addition, this process helps bring focus to the dataset by removing it from its original UI. For example, a comparison of all of Google’s autocomplete suggestions from A to Z would not be possible through the search box, because it retrieves results one at a time.

Implementation Prototype: Google’s Supersearch

“Google’s ABC,” the prototype described in the previous paper, did not allow users to easily compare autocomplete suggestions across different services — Google Web and Google Images, in that case. This new prototype, “Google’s Supersearch,” was developed in response and allows users to compare autocomplete suggestions from 7 different Google services: Web, YouTube, Recipes, Products, News, Images, and Books.

However, this prototype is not conceptually linked to the series outlined in the methodology section. Most of the questions it tries to answer are technical: which Google services offer access to their autocomplete suggestions? Is it possible to retrieve their results without using any API?

In their paper “What Do Prototypes Prototype?,” Stephanie Houde and Charles Hill propose a model for categorizing prototypes according to their functions. [@houdc_what_1997] Based on that model, “Google’s Supersearch” is considered here an implementation prototype, since most of its questions relate to technical aspects of a larger project.

prototypes

“Google’s Supersearch” has a simple functionality. As users type in the search box, the page displays autocomplete suggestions — from each of the 7 Google services — as lists in separate boxes. The options are clickable and redirect to a Google search page with the results for the selected term. For example, clicking on “chicken dance” under the YouTube list redirects to a YouTube search page with all the results for “chicken dance.” This is achieved using the same undocumented Google Autocomplete “API” [@google] employed in “Google’s ABC.”
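For reference, the endpoint behind this feature can be queried with a few lines of JavaScript. The sketch below is a minimal illustration based on the user write-ups cited above; the `client` and `ds` parameters are assumptions drawn from those write-ups, not officially documented behaviour.

```javascript
// Minimal sketch of querying the undocumented autocomplete endpoint.
// The `client=firefox` and `ds` parameters are assumptions taken from
// user write-ups [@google]; Google does not document them officially.
const https = require('https');

function getSuggestions(query, service) {
  const url = 'https://suggestqueries.google.com/complete/search' +
    '?client=firefox&ds=' + (service || '') +
    '&q=' + encodeURIComponent(query);
  return new Promise(function (resolve, reject) {
    https.get(url, function (res) {
      let body = '';
      res.on('data', function (chunk) { body += chunk; });
      // Response shape: ["query", ["suggestion 1", "suggestion 2", ...]]
      res.on('end', function () { resolve(JSON.parse(body)[1]); });
    }).on('error', reject);
  });
}

// Compare Web and YouTube suggestions for the same prefix.
getSuggestions('chicken').then(console.log);       // Web
getSuggestions('chicken', 'yt').then(console.log); // YouTube
```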

The interface is minimal and has no symbolic associations — the ABC metaphor was removed from it. The only text displayed besides the title and the results is “What does each Google service suggest you?”

The prototype was received by users as a rather practical and useful application, capable of aggregating different functions from Google in a single place.

supersearch

Role Prototype: Google ABC Book (Printed)

The design questions addressed by this prototype were related to the communicative aspects of the project. First, do people recognise it as an ABC book? Do they understand it is intended for children? Second, what do they think about the use of autocomplete suggestions as educational material? Do they perceive any critical message?

The prototype was made by storing the top autocomplete suggestions for each letter, using Google Images. After that, images for each term were searched and downloaded. The book was then designed in Adobe Illustrator, using one of the many layouts for instant books described by Esther K. Smith in “How to Make Books.” [@smith_how_2007] It was printed on a single sheet of paper in letter format. This layout does not require any binding or advanced tools besides scissors and glue.

IMG_6659

IMG_6658

The back of the book displays the time and date of that edition, along with the following lines: “Earlier and earlier kids are learning things online. So why not start as soon as possible? This ABC book curated by Google Images brings to your children the top searches for each letter! Kids will love it!”

The response to this prototype was positive in general. Though some people did not understand exactly how the content was being generated, the critical association between the online source and the print form was clear to most users. Some even stated that giving kids these books could be an interesting experiment, since the celebrities in them were well known even to children at an early age. This was a positive outcome of the user test, since the intention of this project is not to point out negative aspects of technology, but to provoke discussion and reflection.

IMG_6663

Integration Prototype: Google ABC Book (Online)

This prototype combines technical aspects from both previous ones. Its online presence utilizes the same undocumented API [@google] used to retrieve autocomplete suggestions from various Google services. The layout is set so that users can print the page, cut, fold, and make an instant book exactly like the one described in the previous section.

Accessing and manipulating the images was the most challenging part of the development of this prototype. As described in the previous paper, the new Google Custom Search [@custom] does not allow for customization of the results layout. Also, the Google script that receives and displays the images on the page runs on the server side. Therefore, it is not possible to manipulate the images directly with a script on the client side.

In previous prototypes, a quick fix was to use the official custom search and then scrape the results from the page to manipulate them. That would be laborious and ineffective for the current prototype, in which 26 searches need to be performed in sequence.

The solution was to use a web scraper running on the server side. It works by emulating a client visiting a URL — in this case, 26 Google Images pages, one for each term. A script then grabs the first image from the results and sends it back to the client. This was achieved using node.js, a server-side JavaScript runtime, with a combination of modules for web scraping and for connecting client and server.
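The prototype relies on phantomJS to emulate a full browser (see the steps below); as a simplified stand-in, the sketch here uses the request and cheerio modules (assumed, not named in the brief) to show the general shape of such a server endpoint.

```javascript
// Simplified sketch of the server-side scraper. The actual prototype
// emulates a browser with phantomJS; this stand-in substitutes a static
// fetch (request + cheerio, assumed modules) to show the endpoint's shape.
const express = require('express');
const request = require('request');
const cheerio = require('cheerio');

const app = express();

app.get('/scrape', function (req, res) {
  const term = req.query.term; // e.g. 'Ariana Grande'
  const url = 'https://www.google.com/search?tbm=isch&q=' +
    encodeURIComponent(term);
  request(url, function (error, response, body) {
    if (error) return res.status(500).end();
    const $ = cheerio.load(body);
    // grab the address of the first image result and send it back
    res.json({ term: term, src: $('img').first().attr('src') });
  });
});

app.listen(3011); // the port used by the published prototype
```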

Once that problem was solved, the basic layout was designed, aimed at making the page ready for print. This was achieved through a combination of CSS rules and JavaScript.

Some iterations at this point included sliders to change the colors of the pictures and letters. This was an attempt to solve the contrast problems, but it was discarded to avoid adding extra steps to the generative design process. Instead, the version sent to users displayed the letters in black with a white outline.

google_abc_3

google_abc_4

google_abc_6

Technically, the prototype works as follows:

  • The user selects a service. A script runs through the 26 letters of the alphabet and retrieves autocomplete suggestions for each letter using Google’s undocumented API (see design brief #3 for details).
  • By hitting OK to confirm, the user triggers the instant book layout.
  • After the layout is complete, the page calls a web scraper, sending the suggestions stored in the first step — ‘Ariana Grande’ for A, ‘Beyonce’ for B, etc.
  • The scraper runs on node.js, some node modules, and phantomJS. It loads one Google Images page for each term sent by the client — ‘Ariana Grande,’ ‘Beyoncé,’ ‘cat,’ etc.
  • The scraper stores the address of the first image result and sends it back to the client.
  • The image is appended to its position, below the initials.
  • If ‘print’ is hit, a new html page is created and the html from the book is sent to it (sketched below).

technical_diagram
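The final ‘print’ step, referenced above, might be sketched as follows. The #book container id and the page rule are assumptions; the point is only that the book’s html is copied into a fresh page, free of the surrounding UI.

```javascript
// Hypothetical sketch of the 'print' step: copy the book's html into a
// new page so the print view carries none of the surrounding interface.
// The #book container id is an assumption, not the prototype's actual markup.
function printBook() {
  const book = document.getElementById('book').outerHTML;
  const w = window.open('', '_blank');
  w.document.write(
    '<html><head><style>' +
    '@page { size: letter; margin: 0; }' + // single letter-format sheet
    '</style></head><body>' + book + '</body></html>'
  );
  w.document.close();
  w.print(); // hand off to the browser's print dialog
}
```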

User Tests

The prototype was sent as a web link (http://54.204.173.108:3011) to 22 people, of whom 10 sent a response. Only a few had a general idea of the concept, and most of them had not seen the printed prototype first. People were invited to participate through email or in person. Because different aspects of the project were being tested at once, no guidelines were given on what to comment on.

The main areas on which this prototype focused were:

  • Implementation: check technical aspects, especially related to the web scraper, concerning speed and stability.
  • Role: questions related to the communicative aspects: is the “ABC Book” metaphor clear? Do people sense a critical tone in the message? Does it provoke discussion?

As for the look and feel, it was only partially addressed. The default html styling adopted is far from representing a definitive visual design for the project. However, the basic wireframe, with the process split into 3 steps — source selection, printed form, and tutorial — was meant to be final.

Up until it was sent to the users, this prototype had been running on a local server with no major technical problems. That was not the case when it was published on a remote server, though. The time for scraping and loading the images went from less than a minute to about 5 minutes. This had a drastic effect on the user experience: some participants responded immediately expressing their concern, and some ended up not taking part in the test at all.

Technical adjustments were made and the prototype was updated within 12 hours of the invitation. In the new version, it takes about 2 seconds to load the first image and 15 to 20 seconds to load all 26.

Once this problem was solved, the responses from the participants did not point out other technical problems. The information gathered after that can be grouped into 4 areas: communication, experience, layout, and instructions. Each one is also split here into feedback and suggestions — except for the first, for which users did not suggest any direct change.

  1. Communication
    • Feedback
      • It is clearly a critical project, not a commercial one.
      • The print aspect works well to capture a moment of something that is constantly changing.
      • It is not clear if the message is genuine or ironic.
      • It could be utilized as something practical, to teach computer literacy.
  2. Experience
    • Feedback
      • The questionnaire format is confusing, because there is no interaction after the first step.
      • Instructions and options don’t leave much breathing room.
      • There is no option to personalize the results.
      • Some images are not recognizable, and there is no way to see what they refer to.
    • Suggestions
      • Expand the questionnaire/interaction to the next steps.
      • Allow for theme-based custom results — pop-singers, for instance.
      • Add option to change location, allowing for comparison between countries.
      • Add interaction to images to display their descriptions.
      • Include option to change each of the results.
  3. Layout
    • Feedback
      • It is hard to see the images, they are too small and the letters cover a big part of them.
      • The upside-down letters are not legible.
      • The layout is not visually exciting.
      • The layout only works for print.
    • Suggestions
      • Add option to control font size.
      • Create two separate layouts: one for the screen and another for print.
      • Show the final product somehow.
  4. Instructions
    • Feedback
      • It is hard to follow the instructions.
      • Some sentences are confusing.
    • Suggestions
      • Moving images or videos would be more clear.

Findings and Next Steps

Some of the responses to the communicative aspect of the project were contradictory. Most people understood the project as somewhat critical, though they were not sure about its precise intention. That is not a major concern, because leaving room for interpretation and discussion is part of the goal of this project.

As for the problem with the instructions, it can be easily addressed through the use of video or moving images, as suggested in one of the responses.

After the test, some of the users were presented with the printed version of the prototype — none of them had printed it from the page. The responses were all positive, as opposed to their opinions about the web page layout. This was an unexpected and yet reasonable problem: since the page was developed as a simple bridge between the source and the print output, not much thought was put into the online experience. As a consequence, if users do not print it, their experience is limited and frustrating. Therefore, experience and layout are the areas that demand the deepest and most immediate changes in the next iterations.

The next steps of this project will conduct two simultaneous investigations:

  • Online experience: define a better balance between the print and digital versions. The latter should work independently of the former, besides being more interactive and having its own layout.
  • Series: investigate other online sources and print forms that fulfill the rules and conceptual goals of this project. A methodology still being tested is to list the most popular and relevant online sources in use today and find their corresponding print forms.

Bibliography

“Custom Search — Google Developers.” 2014. Accessed October 17. https://developers.google.com/custom-search/.

“Google Autocomplete ‘API’ — Shreyas Chand.” 2014. Accessed October 17. http://shreyaschand.com/blog/2013/01/03/google-autocomplete-api/.

Houde, Stephanie, and Charles Hill. 1997. “What Do Prototypes Prototype?” http://www.itu.dk/~malmborg/Interaktionsdesign/Kompendie/Houde-Hill-1997.pdf.

Paul, Christiane. 2007. The Database as System and Cultural Form: Anatomies of Cultural Narratives. http://visualizinginfo.pbworks.com/f/db-system-culturalform.pdf.

Paul, Christiane, and Jack Toolin. 2014. “Encyclopedia of Library and Information Sciences.” Accessed November 26. http://www.tandfonline.com/doi/abs/10.1081/E-ELIS3-120043697.

Smith, Esther K. 2007. How to Make Books: Fold, Cut & Stitch Your Way to a One-of-a-Kind Book. 1st ed. New York: Potter Craft.

Whitelaw, Mitchell. 2014. “FCJ-067 Art Against Information: Case Studies in Data Practice.” Accessed September 22. http://eleven.fibreculturejournal.org/fcj-067-art-against-information-case-studies-in-data-practice/.

[Thesis 1] Design Brief #3

Thesis Question

In digital systems, we access information through databases and software. Graphical interfaces make this process easy and almost invisible, but they might also lead to false assumptions and misconceptions derived from a lack of knowledge of the systems we engage with.

This project is concerned with the relationship between users and data as mediated by software. It will explore how the process of gathering information from databases is shaped by the algorithms that drive it. How much do we know about the systems we use daily? How aware are we of possible biases in data retrieval? How can computerised systems be transparent and yet user-friendly?

Research

Databases

In this project, the term database defines any system of information, not restricted to a digital format. As researcher Christiane Paul points out, “it is essentially a structured collection of data that stands in the tradition of ‘data containers’ such as a book, a library, an archive, or Wunderkammer.” [@paul_database_2007]

Though analog and digital databases share this basic definition, the latter allows for multiple ways of retrieving and filtering the data it contains. To do so, it should be organised according to a data model, that is, the structural format in which the data is stored: hierarchical, relational, etc. It should also comprise the software that provides access to the data containers. Finally, according to Paul’s definition, another component of digital databases is the users, “who add a further level in understanding the data as information.” [@paul_database_2007]

Data versus Information

This distinction between data and information is important when analysing the impact of algorithms on data retrieval. Presenting a critical retrospective of data artworks, Mitchell Whitelaw defines data as “the raw material of information, its substrate,” whereas information “is the meaning derived from data in a particular context.” [@whitelaw_fcj-067_] In Whitelaw’s view, any rearrangement of the former is directed towards constructing the latter.

Whitelaw’s criticism is particularly directed towards works in which this distinction is blurred. Writing about We Feel Fine [@we] and The Dumpster [@dumpster:], he states that “these works rely on a long chain of signification: (reality); blog; data harvesting; data analysis; visualisation; interface. Yet they maintain a strangely naive sense of unmediated presentation.” [@whitelaw_fcj-067_] The mediation between user and database in these works therefore leads to a confusion of data with information.

Search Boxes

Although Whitelaw writes about these issues in the context of data art, they can be extended to our everyday use of the web. As he writes in the introduction to his article, “in digital, networked culture, we spend our lives engaged with data systems. (…) The web is increasingly a set of interfaces to datasets.” [@whitelaw_fcj-067_] Search boxes constitute a particularly interesting interface in the context of this prototype. Their path to the data is not explicit, as opposed to a set of hyperlinks, which might give users a notion of how the data is structured. Also, their seeming neutrality — usually a single html input element followed by a button — reveals very little of the search engine itself.

An exception to that is the autocomplete feature, which tries to predict users’ input before they finish typing it. By doing so, it exposes subsets of the data and, as a consequence, part of the algorithm behind it.

Google’s Autocomplete

Writing about how users interact with search boxes, Steve Krug tells a story from user tests he performed: “My favorite example is the people (…) who will type a site’s entire URL in the Yahoo search box every time they want to go there (…). If you ask them about it, it becomes clear that some of them think that Yahoo is the Internet, and that this is the way you use it.” [@krug_dont_2005] Krug’s book was originally published in 2000. Fourteen years later, users might not make the URL mistake anymore, and Google has by far surpassed Yahoo! in popularity — approximately 1 billion unique monthly visitors versus 300 million. [@top] However, the notion of a search engine as the web seems to persist.

In an article for The Atlantic about the history of Google’s autocomplete, Megan Garber says that by typing a search query “you get a textual snapshot of humanity’s collective psyche.” The same notion is present in a series of reports from the website Mashable.com, with headlines like “The United States of America According to Google Autocomplete” and “Google Autocomplete Proves Summer Is Melting Our Brains.” [@google-2] Though the headlines are often humorous, they convey an idea of Google’s autocomplete as an accurate representation of our collective behaviour.

Some reports about the feature are more critical, though. An article in The Guardian about the UN Women campaign based on the feature states that Google’s autocomplete “isn’t always an entirely accurate reflection of the collective psyche. (…) What’s more, autocomplete also serves as a worrying indicator of how, in the interests of efficiency, we’re gradually letting technology complete our thought processes for us.” [@googles-1]

UN Women ad campaign uses Google’s autocomplete to point out sexism

According to Google’s own support page, the suggestions are “automatically generated by an algorithm without any human involvement, based on a number of objective factors.” The sense of neutrality in this sentence contradicts the following one, in which the algorithm “automatically detects and excludes a small set of search terms. But it’s designed to reflect the diversity of our users’ searches and content on the web.” The system is thus automated, but not free from human decisions.

Precedent

Image Atlas

Image Atlas [@about] was created by Aaron Swartz and Taryn Simon during a hackathon promoted by the arts organization Rhizome. It takes a search term as input, translates it into 17 different languages, and performs searches on Google Images in 17 different countries using the translated terms.

Presenting the project, Swartz said that “these sort of neutral tools like Facebook and Google and so on, which claim to present an almost unmediated view of the world, through statistics and algorithms and analyses, in fact are programmed and are programming us. So we wanted to find a way to visualize that, to expose some of the value judgments that get made.” [@aaron] In other words, Swartz and Simon’s project was intended to provoke reflection about Google’s image search engine.

Project Concept

This prototype will utilize Google’s Autocomplete suggestions as a means to illuminate how the search engine works. It is not intended to make a moral judgement on the subject. Instead, its purpose is to lead users to question their own knowledge of the system.

The Autocomplete engine provides different results depending on the Google service being used. To provide comparison, this prototype will allow access to the suggestions from Google Web and Google Images.

The user input will be restricted to one character only. The reasons for this constraint are:

  • lower complexity: a reduced number of suggestions provides fewer and clearer comparisons.
  • emphasis on the system’s response, not the user input.

Apart from the input restriction, the prototype will use a familiar interface: a regular search box followed by a button. The prototype must also provide room for iteration and entertainment, in order to engage users in the exploration of the suggestions. For that reason, it will use a visual output instead of the regular list of search results.

Methodology

Database: Google Search and Autocomplete feature

The old Google Web Search API [@developers] and Google Image Search API [@google-1] have been deprecated since 2010 and 2011, respectively. Though still functioning, they will stop operating soon, and users are encouraged to replace them with the new Google Custom Search API [@custom]. I started by implementing the search box using the new engine. However, this API is actually intended to perform custom searches inside specific websites, whereas the purpose of this project was to search the entire web using the Google Search engine. In the new API this is only possible through a laborious process: setting up a custom search with any given website, then editing the search to include results from the entire web, and finally removing the website that was set up at first.

The main problem, though, was that the autocomplete did not seem to work for the “entire web” option. In short, it does not mimic the regular Google Search as I expected.

After some research, I found a Google Autocomplete “API” that allows access to Google’s autocomplete results. However, it seems to be an engine developed for internal use, not really a public API. There is no official documentation from Google, only a few experiments by users who tried to document it. [@google-3] One of them wrote a jQuery plugin [@getimagedata] that allows for searches using various Google services, all through the same undocumented API.

Next, to realise the idea of mimicking Google’s original search, I integrated the autocomplete plugin with the official Google Custom Search bar. The resulting search element mixes custom scripts with Google’s regular service.

search_bar

First Iteration

The new custom search is based on simple html tags and does not require JavaScript knowledge. However, that also makes it harder to customize: results can only be displayed according to a predefined set of layouts. The first iteration of this prototype utilises a default layout. Clicking on the tabs, users can search on Google Web or Google Images. These UI elements are all presets from the API.

Google Custom Search default layout

The radio buttons at the top of the page change the autocomplete suggestions. This turned out to be confusing, because the default tabs for the search output have similar labels. Also, the resulting layout, similar to a regular list of results, is not visually appealing.

image_search_layout

Second Iteration

The new Search API does not provide access to the results as a structured database, which would make it easy to customise the visual output. The solution was to apply an alternative method (sketched after the list):

  • Scrape the html elements from the page, through the document object model.
  • Erase Google’s injected html elements.
  • Store the addresses of the pictures.
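A minimal sketch of that method, assuming the widget injects its results into a container such as .gsc-results (the real class names depend on Google’s injected markup):

```javascript
// Hypothetical sketch of the alternative method. The .gsc-results
// selector is an assumption -- the actual class names depend on the
// markup Google injects into the page.
const container = document.querySelector('.gsc-results');
const images = container.querySelectorAll('img');
// store the addresses of the pictures
const addresses = Array.prototype.map.call(images, function (img) {
  return img.src;
});
// erase Google's injected html elements
container.innerHTML = '';
```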

The idea for the visual output was to display the images forming the letter typed by the user. To do that through an automated process, it is necessary to have a grid to position the images. This could be done by either:

  • Writing a preset of coordinates for each pixel of each letter.
  • Rasterizing a given letter and storing the pixel coordinates.

Though less precise, the second method seemed faster to develop. It was prototyped using Processing first, in order to evaluate its feasibility and visual result.

pixelated_font

After that, this method was translated to the web page using P5.js [@p5.js], a JavaScript library that utilizes Processing syntax. The process turned out to be laborious, though: there seem to be some flaws in the current implementation of the pixel manipulation methods in P5.
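The rasterization step might be sketched in P5 as follows; the grid size and darkness threshold are illustrative values, and the function is assumed to be called from setup():

```javascript
// Sketch of the second method in P5: rasterize the letter into a small
// offscreen buffer and record the coordinates of its dark cells.
function letterGrid(letter, cols, rows) {
  const pg = createGraphics(cols, rows); // low-resolution raster buffer
  pg.pixelDensity(1); // one array entry per cell, even on retina displays
  pg.background(255);
  pg.textAlign(CENTER, CENTER);
  pg.textSize(rows);
  pg.text(letter, cols / 2, rows / 2);
  pg.loadPixels();
  const coords = [];
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      // pixels is a flat [r, g, b, a] array; keep dark cells only
      if (pg.pixels[4 * (y * cols + x)] < 128) {
        coords.push({ x: x, y: y });
      }
    }
  }
  return coords; // grid positions where images will be placed
}
```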

Besides, the images could not be loaded using the library, due to browser restrictions — see cross-origin resource sharing [@cross-origin] and P5’s developer Lauren McCarthy commenting on the issue [@loadimage]. The final prototype uses regular img tags instead.

The script loops through the set of 20 retrieved images, positioning them according to the coordinates of the pixelated character. The following diagram sums up the technical process of this prototype:

diagram
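In code, the placement loop might look like the sketch below; coords comes from the rasterization sketch above, and cellSize (pixels per grid cell) is an assumed name.

```javascript
// Sketch of the placement loop: every cell of the pixelated letter
// receives one of the 20 retrieved images, cycling through the set.
// `coords` and `cellSize` are assumed names from the steps above.
function placeImages(addresses, coords, cellSize) {
  coords.forEach(function (cell, i) {
    const src = addresses[i % addresses.length]; // cycle the 20 images
    const img = document.createElement('img');   // plain img tags avoid the CORS issue
    img.src = src;
    img.style.position = 'absolute';
    img.style.left = (cell.x * cellSize) + 'px';
    img.style.top = (cell.y * cellSize) + 'px';
    img.style.width = img.style.height = cellSize + 'px';
    document.body.appendChild(img);
  });
}
```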

User Tests

The second prototype was published online [@googles] and sent to two different users. Besides the search UI elements, the only information displayed was:

  • Title: “Google’s ABC”
  • Description: “The alphabet through Google’s autocomplete”
  • A question mark linking to a blog post [@thesis] that outlined the project and part of its process.

a

e

The feedback from the users can be separated into 3 categories, according to the problems they address: interactive, visual, and communicative/conceptual.

  • Interactive
    • Lag when loading the results, made worse by the lack of feedback from the system.
    • Autocomplete not always responsive.
    • Radio buttons and autocomplete not working well together. Autocomplete options disappear after radio is selected.
  • Visual
    • Visual shapes not always forming recognisable letters.
    • Search box overlapping with letter, making some harder to recognise.
  • Communicative/conceptual
    • Message not clear. Comparing the results and thinking critically about them was only possible after reading the blog post.

Findings and Next Steps

The visual and interactive problems pointed out by the user tests were addressed in the last iteration of this prototype. Their solution was mostly technical and involved fixing some browser compatibility problems.

The communicative and conceptual problems, though, demand deeper changes in future prototypes. The interface is familiar enough to interact with, but its presumed neutrality does not seem to encourage comparison, on an immediate level, nor reflection, as an ultimate goal. As for the entertaining purposes of this work, the lack of feedback makes any conclusion difficult.

Nevertheless, the technical aspects of this prototype point to some interesting aspects of how the autocomplete works. An exploration of the letters for each service, Images and Web, reveals significant differences: the former is mainly composed of celebrities, whereas the latter seems dominated by companies. A more explicit comparison between the two could lead to more engaging interactions.

Because comparison and communication seem to be the main problems to address, these are possible next steps:

  • Compare different systems — Google services? Various search engines?
  • Iterate on ways to make comparisons visual.
  • Sketch visual solutions that concisely communicate the goals.

Bibliography

“$.getImageData.” 2014. Accessed October 20. http://www.maxnov.com/getimagedata/.

“Aaron Swartz’s Art: What Does Your Google Image Search Look Like in Other Countries? | Motherboard.” 2014. Accessed October 14. http://motherboard.vice.com/blog/aaron-swartzs-art.

“About — the Publishing Lab.” 2014. Accessed October 11. http://the-publishing-lab.com/about/.

“Cross-Origin Resource Sharing – Wikipedia, the Free Encyclopedia.” 2014. Accessed October 23. http://en.wikipedia.org/wiki/Cross-origin_resource_sharing.

“Custom Search — Google Developers.” 2014. Accessed October 17. https://developers.google.com/custom-search/.

“Developer’s Guide – Google Web Search API (Deprecated) — Google Developers.” 2014. Accessed October 19. https://developers.google.com/web-search/docs/.

“Google Autocomplete ‘API’ — Shreyas Chand.” 2014. Accessed October 17. http://shreyaschand.com/blog/2013/01/03/google-autocomplete-api/.

“Google Image Search API (Deprecated) — Google Developers.” 2014. Accessed October 19. https://developers.google.com/image-search/.

“Google Poetics.” 2014. Accessed October 22. http://www.googlepoetics.com/.

“Google’s ABC.” 2014. Accessed October 25. http://54.204.173.108/parsons/thesis_1/google_alphabet/.

“Google’s Autocomplete Spells Out Our Darkest Thoughts | Arwa Mahdawi | Comment Is Free | Theguardian.com.” 2014. Accessed October 22. http://www.theguardian.com/commentisfree/2013/oct/22/google-autocomplete-un-women-ad-discrimination-algorithms.

Krug, Steve. 2005. Don’t Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition. 2nd edition. Berkeley, Calif: New Riders.

“loadImage() Can No Longer Specify Extension with Second Parameter. · Issue #171 · Lmccart/p5.js.” 2014. Accessed October 23. https://github.com/lmccart/p5.js/issues/171.

“P5.js.” 2014. Accessed October 23. http://p5js.org/.

Paul, Christiane. 2007. The Database as System and Cultural Form: Anatomies of Cultural Narratives. http://visualizinginfo.pbworks.com/f/db-system-culturalform.pdf.

“The Dumpster: A Visualization of Romantic Breakups for 2005.” 2014. Accessed October 22. http://artport.whitney.org/commissions/thedumpster/.

“[Thesis 1] Google ABC – Technical Module Prototype | Gabriel.” 2014. Accessed October 25. http://gabrielmfadt.wordpress.com/2014/10/20/thesis-1-google-abc-technical-module-prototype/.

“Top 15 Most Popular Search Engines October 2014.” 2014. Accessed October 22. http://www.ebizmba.com/articles/search-engines.

“We Feel Fine / by Jonathan Harris and Sep Kamvar.” 2014. Accessed October 22. http://wefeelfine.org/.

Whitelaw, Mitchell. 2014. “FCJ-067 Art Against Information: Case Studies in Data Practice.” Accessed September 22. http://eleven.fibreculturejournal.org/fcj-067-art-against-information-case-studies-in-data-practice/.

[Thesis 1] Design Brief #2

Thesis Question

In spite of current improvements in technology, data visualization continues to use techniques that date back to its origins. Digital interfaces might have changed the way we interact with data, making it possible to filter it, zoom in, and access its details. However, the basic principles for displaying data are still based on reductions to geometric shapes, like rectangles, circles, and lines.

When working with cultural data, this gap between representation and artifact might be particularly problematic. Does a bar chart of the most used words in a book reveal more than the book itself? Can a pie chart of the colors of a painting tell us more about the painting than the actual image? Or is it possible to balance both representations? If so, can modern technologies create new visualization techniques to bridge this gap between representation and artifact?

Research

In his paper “Visualizing Vertov,” [@manovich_visualizing_2013] Lev Manovich analyses the work of Russian filmmaker Dziga Vertov using methods from his Software Studies Initiative. The research intends to show alternative ways to visualize media collections and focuses on the movies The Eleventh Year (1928) and Man with a Movie Camera (1929). It utilizes digital copies of Vertov’s movies provided by the Austrian Film Museum in Vienna. They are the media source from which movie frames are taken, visualized, and analysed using custom software. They also provide metadata for the research: frame numbers, shot properties, and manual annotations. A secondary source utilised by Manovich is Cinemetrics, a crowdsourced database of shot lengths from more than 10,000 movies.

As for the visual representations, traditional techniques, like bar charts and scatterplots, are complemented by Manovich’s “direct visualization” approach. In this method, “the data is reorganized into a new visual representation that preserves its original form.” [@manovich_what_2010]

Manovich describes his own method of analysis as analogous to how Google Earth operates: from a “bird’s eye” view of a large collection of movies to a zoom into the details of a single frame. Those steps can be summarised as follows:

vertov_01

  • Panorama
    • ‘Birds-eye’ view: 20th Century movies compared by ASL (average shot length). Technique: scatterplot.
    • Timeline of mean shot length of all Russian films in the Cinemetrics database. Technique: line chart.
    • Movies from Vertov and Eisenstein compared to other 20th century movies. Technique: scatterplot.

vertov_02

  • Shot length
    • The Eleventh Year and Man with a Movie Camera compared by shot length. Technique: bars, bubbles, and rectangles.
    • Zoom-in of the same visualisations.

vertov_03

  • Shot
    • Each of 654 shots in The Eleventh Year, represented by its second frame. Technique: direct visualization.
    • Shots rearranged based on content (close-ups of faces). Technique: direct visualization.
    • Shots and their length. Technique: direct visualization and bar chart.

vertov_04

  • Frame
    • First and last frames from each shot compared. Technique: direct visualization.
    • Average amount of visual change in each shot. Technique: direct visualization (2nd frame) and bar chart.
    • Average amount of visual change in each shot. Technique: direct visualization (juxtaposition) and bar chart.
    • Frames organised by visual property. Technique: direct visualization.

Also, Manovich uses contextual notes to draw conclusions from the visualisations. His findings are often compared to facts from the history of cinema, Vertov’s early manifestos, and previous studies, confirming or contradicting them.

Project Concept

This prototype will compare two movies utilising some of the methods from “Visualizing Vertov.” It will combine traditional visualization techniques — charts using 2D primitives — and the “direct visualization” approach.

As for the data, it will use a specific component of film: sound. Because Manovich’s research relies largely on visual artefacts, using sound in this prototype might reveal limitations of his method or point to new ways of applying it.

Besides, the Cinemetrics project focuses exclusively on a single aspect of movies: “In verse studies, scholars count syllables, feet and stresses; in film studies, we time shots.” [@cinemetrics] This approach seems to underestimate other quantifiable data that could be used in movie studies.

In spite of using sound instead of time or visuals, this prototype will keep the analytical aspect of Cinemetrics and “Visualizing Vertov.” It will then draw conclusions about this approach as compared to the supercut method.

To sum up, these are the questions raised by this prototype:

  • Is it possible to apply the “direct visualization“ technique to a non-visual artifact?
  • Which patterns and insights can a sound analysis of a movie reveal as compared to a visual analysis?
  • What are the results of an analytical method as compared to the supercut technique?

Research Methodology

Media

All levels of analysis in “Visualizing Vertov” — movies, shot lengths, shots, and frames — utilise comparisons. This device is largely employed in traditional data visualization, and seems to be even more useful for cultural artefacts. In Vertov’s case, for instance, the shot length measurement would not provide any insight if it were not compared to the Russian avant-garde or to 20th-century movies in general.

Following the same logic, this prototype takes a European movie, Wings of Desire (Der Himmel über Berlin, 1987), by German filmmaker Wim Wenders, and its American remake, City of Angels (1998), directed by Brad Silberling.

Data

The following diagram shows the development steps for this prototype:

methodology_diagram

The digital copies of the movies utilised in the first step were not of high quality. Also, the process by which the data was gathered does not preserve a high-resolution sample. Those are limitations of this prototype, which focused on a rapid technique to extract and compare sound data. They affect the visualization decisions as well.

Visualization

The data, exported as XML, was read and visualised using D3, a JavaScript library for data visualization. D3 provides a fast and reliable way to parse and represent large amounts of data in web documents. Web pages are also able to natively embed media such as sound. Those are the reasons why the final result of this prototype is a web page.
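The parsing step might look like the following, assuming D3 v3 (current at the time) and an export with one element per volume sample; the actual XML schema is not given in this brief.

```javascript
// Sketch of the parsing step, assuming D3 v3 and an XML export with
// <sample db="..."/> elements -- the schema here is an assumption.
d3.xml('wings_of_desire.xml', function (error, xml) {
  if (error) throw error;
  const samples = Array.prototype.map.call(
    xml.querySelectorAll('sample'),
    function (node) { return +node.getAttribute('db'); }
  );
  drawArea(samples); // hand off to the chart sketched in the next section
});
```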

First Iteration

The first iteration of this prototype is a simple area chart depicting the sound variations in the movies. Because of the low quality of the sources, it utilizes a relative scale for each movie: the highest value of each film is represented as the highest point of its scale, and all other values are relative to that.

For this reason, the peaks of the two movies might differ in absolute decibel value, so this parameter should not be used for comparison.

prototype_01
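A minimal sketch of that relative scale, again assuming D3 v3: each film’s own loudest sample defines the top of its chart.

```javascript
// Minimal sketch of the relative-scale area chart (D3 v3 API).
function drawArea(samples) {
  const width = 900, height = 120;
  const x = d3.scale.linear()
    .domain([0, samples.length - 1]).range([0, width]);
  // relative scale: the film's own loudest sample maps to the top
  const y = d3.scale.linear()
    .domain([0, d3.max(samples)]).range([height, 0]);
  const area = d3.svg.area()
    .x(function (d, i) { return x(i); })
    .y0(height)
    .y1(function (d) { return y(d); });
  d3.select('body').append('svg')
    .attr('width', width).attr('height', height)
    .append('path').datum(samples)
    .attr('d', area);
}
```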

Some visual disparities appear in this first iteration: longer areas of high volume in Wings of Desire versus constant medium-volume ones in City of Angels.

However, the sound volume alone does not seem to provide many insights. Are these variations due to background music? Which patterns represent dialogue? The representation is so highly encoded that it leaves no way to answer these questions.

Second Iteration

In order to add more clarity, the second prototype includes visual marks representing the parts of the movies in which dialogue occurs. A computational method to differentiate music from speech would be laborious and unreliable. The solution was to reuse part of the first prototype developed for this project, which parses subtitle files into different formats. The software generated JSON files with the timestamps of the subtitles. Like the sound data, they were read and visualised using D3.

prototype_02
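The marks might be drawn as in the sketch below, assuming the cues are parsed as {start, end} timestamps in seconds and that `duration` (the movie length in seconds) aligns with the chart’s horizontal axis; all names are assumptions.

```javascript
// Sketch of the dialogue marks (D3 v3), assuming JSON cues with start
// and end times in seconds, and an assumed `duration` variable that
// aligns the time axis with the area chart above.
d3.json('subtitles.json', function (error, cues) {
  if (error) throw error;
  const x = d3.scale.linear().domain([0, duration]).range([0, 900]);
  d3.select('svg').selectAll('rect.cue')
    .data(cues)
    .enter().append('rect')
    .attr('x', function (d) { return x(d.start); })
    .attr('width', function (d) { return Math.max(1, x(d.end) - x(d.start)); })
    .attr('y', 120).attr('height', 6) // thin red strip under the waveform
    .attr('fill', 'red');
});
```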

This new iteration seems to shed some light on how the movies compare. While City of Angels shows a constant use of dialogue, the subtitle marks in Wings of Desire show long blanks in several parts of the movie.

prototype_02a

The presence of the red marks also helps us understand the sound representation. By comparing the two, it is possible to see that the alternating medium-low volume represents dialogue, while the constant, higher areas might indicate background music.

prototype_02b

Even though this iteration offers more insights about the movies, most of them are not yet verifiable: the tool does not let us access the sound itself.

Third Iteration

The last iteration of this prototype embeds the sound extracted from the movies into the page. It is controlled by a slider that matches the horizontal dimension of the charts.

This final representation is analogous to the shot-length one in “Visualizing Vertov.” It combines a 2D chart with the artefact itself, trying to bridge the gap between the two.

prototype_03
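The coupling between the slider and the embedded sound reduces to a pair of event handlers; the element selectors in this sketch are assumptions.

```javascript
// Sketch of the slider-audio coupling. The selectors are assumptions;
// the slider spans the same horizontal dimension as the charts.
const audio = document.querySelector('audio');
const slider = document.querySelector('input[type=range]');

// dragging the slider scrubs the embedded sound
slider.addEventListener('input', function () {
  audio.currentTime = (slider.value / 100) * audio.duration;
});

// playback moves the slider along the chart's horizontal axis
audio.addEventListener('timeupdate', function () {
  slider.value = (audio.currentTime / audio.duration) * 100;
});
```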

Findings and Next Steps

Though the third iteration of the prototype includes the sound, it does not achieve the same results as the display of frames in “Visualizing Vertov.” The access to the sound is mediated through a GUI element, which creates an extra step. On one hand, the overview of the “media collection” (the sound fragments) is only accessible through the chart. On the other hand, access to the sound through the player does not provide an overview, since it is not possible to listen to all the sound fragments at the same time. As opposed to what happens in Manovich’s visualizations, these two representations were not combined.

Nevertheless, the sound analysis does seem to provide some useful insights for movie studies. Even though this rough prototype relies on low-resolution sources and the support of subtitle files, it reveals some interesting patterns. An analysis of other sound components might yield more interesting findings.

Finally, this prototype shows a radically different approach compared to the supercut technique. The analytical method of translating time to space and reducing sound to visuals has a far less entertaining result. In conclusion, the next steps for this project are:

  • Find out the communication purposes of the project — data visualization as an analytical tool? A medium? A language?
  • Continue the explorations of media collections using a “direct visualization” approach.
  • Expand this approach, as much as possible, to media and representations not yet explored by the Software Studies Initiative.

Bibliography

“Cinemetrics – About.” 2014. Accessed October 10. http://www.cinemetrics.lv/index.php.

Manovich, Lev. 2010. “What Is Visualization?” Paj: The Journal of the Initiative for Digital Humanities, Media, and Culture 2 (1). https://journals.tdl.org/paj/index.php/paj/article/view/19.

———. 2013. “Visualizing Vertov.” Russian Journal of Communication 5 (1): 44–55. doi:10.1080/19409419.2013.775546.