ART/CULTURE

Digital Archeologist

Ben Fino-Radin Interview

Interview & Text: Christopher Schreck / Videography: Saul Metnick / Photography: Ports Bishop / Edit: Takeshi Fukunaga

   Something that comes up a lot in my conversations with artists is the importance of perceiving and utilizing what Dave Hickey once termed a “useable past” – the basic idea being that in pushing their work into fresh territory, artists can look to the figures and objects of art history as prospective allies, readily available to be consulted, cited, intermixed, and (mis)appropriated in the hopes of addressing present concerns. It’s an approach centered on the belief that for author and viewer alike, the experience of art is enhanced by our being able to contextualize works within a field of precedents. It also embraces the notion that while circumstances and authorial intent might be obscured over time, the works themselves remain evergreen, untethered to particulars and ripe for rediscovery. Though each extant artwork is subjected to the whims of fashion, even the most presently disfavored item remains viable as a potential resource for some future conversation; though an object may be deemed out of style, it need never be considered obsolete.
   Of course, in order to engage with history this way, one must be assured access to works of art in their most natural state, whether it be as a preserved artifact or some form of purposeful documentation – and it’s on this point that a growing number of artists face a very real dilemma. For while it’s one thing to deny obsolescence as a function of fashion, artists working with digital materials must contend with the fact that as technology advances, the programs and platforms that support their art will eventually become inoperable – thus rendering many of the works themselves literally obsolete. What’s more, the rate of this forced antiquation has grown exponentially over the years; as each wave of new technologies enjoys an increasingly shorter lifespan, so too do the artworks that rely on those technologies as technical, conceptual, and aesthetic frameworks. In terms of retaining digital art’s useable past, then, the implications here are grave: with works falling ever more quickly into obsolescence, entire threads of art historical precedence can effectively vanish, inaccessible to succeeding generations and absent from future discussions.
   In response to these challenges, there has emerged in recent years a small but tight-knit community of practitioners dedicated to establishing open, long-term access to technology-based works of art and design. Fusing traditional conservation practices with inventive technical strategies, the members of this nascent field find themselves tasked with lending some sense of permanence to an environment in constant flux, developing policies and procedures aimed at restoring items once lost to technological obsolescence – and ensuring that the same fate doesn’t befall works being produced today.
   Among the more prominent voices within this burgeoning conversation has been NYC-based conservator Ben Fino-Radin. A self-described “media archeologist, archivist, and conservator of born-digital works,” Fino-Radin has spent the past few years splitting his time between developing a digital cataloging system for MoMA and maintaining Rhizome’s ArtBase archive of digital artwork. (He has since left Rhizome to focus on his work at MoMA full-time.) His practice encompasses a variety of activities, ranging from the painstaking documentation of contemporary artworks to the salvaging of content that lies dormant in outmoded materials – a skill which, at the time of this writing, had most recently led to his involvement in the New Museum’s “XFR STN”, a media-archiving project/exhibition offering artists the chance to digitize and display items otherwise confined to archaic formats.
   Last August, I spent an afternoon with Fino-Radin, visiting him at his lab at MoMA Queens before traveling together to the New Museum to walk through the exhibition. Articulate, enthusiastic, and eager to make relatable what can at times seem a prohibitively technical conversation, Ben spoke at length on a range of topics, offering considerable insight into the ideas and concerns that drive his practice.

I thought we could start by talking a bit about ArtBase. First of all, what kind of criteria do you use in deciding which works to add to the archive? Who makes those decisions?
   I’ve generally been the one who makes those decisions, but that’s mostly been by default, since I’m the only staff member at Rhizome specifically dedicated to the archive. Of course, there’s also been input from various curatorial fellows, or from people like Lauren Cornell [former executive director at Rhizome, currently a full-time curator at the New Museum], any of whom might suggest an artist worth looking into. But apart from that, it’s hard to say why we collect what we collect. I just think it’s like any collection development practice: it’s about looking at the broader collection, seeing where the weaknesses are and seeing how we might strengthen what’s already there. So a lot of what I’ve been doing is looking at the early ‘90s, looking at things that weren’t just “net art” – things like The THING bulletin board (http://en.wikipedia.org/wiki/The_Thing_(art_project)), which was established in 1991. That’s a case where there’s this crucial piece of history whose story really hasn’t been told outside of a small circle – and it may not be for some time. But we wanted to make sure that it was preserved, so that at some point it could be rediscovered and understood a bit more.
Most museums’ acquisition processes involve both accepting donations and making purchases. Is the same true for Rhizome? Are you buying any of these works from the artists or their collectors?
   No. In adding work to the archive, the connotation really isn’t that we want to collect an artist’s work, per se – it’s usually more that we have a relationship with that artist, and we want to hold on to the work and make sure it’s safe. We tend to think of the collecting practice as one in which these artists are donating a preservation master to Rhizome. In doing so, they’re ensured a wider accessibility to their work, of course, but the artists get the added value of knowing that in 50 years, they can always come back to us and get a copy. So if a hard drive they’ve sold to a collector dies, for instance, they can come back to us and we can retrieve the preservation master for them. So while we don’t purchase works from these artists, we do feel that we’re providing a specific service: we’re offering preservation resources for free and providing access to the materials for as long as possible.
Rhizome’s mission statement for ArtBase speaks to the archive’s role as a resource for academic research. Obviously, part of that is giving scholars the opportunity to experience works intact and as intended, but equally useful is the metadata that accompanies each item’s profile. Along those lines, I’d be interested to hear more about how these works are being cataloged once they’re acquired.
   Well, I think the essential challenge with cataloging digital works is that with many of them, we’re dealing with multiple variations – oftentimes, you’ll not only have different versions of a given work, but also different instantiations of those versions. So let’s say we’re looking at Sim City 2000 – there’s the original, the original for Mac, for DOS, and so on, but each of those might have also had deluxe editions or some other version. And since we’re typically working directly with the creators of these items, it can go even deeper, including versions that weren’t released publicly – bug fixes, for example. Each of those is a digital object in its own right.
   Now, in the collections management database at MoMA, none of that information is represented. They work with TMS (http://www.gallerysystems.com/tms) [The Museum System management software], which virtually every museum uses to keep track of their holdings. With TMS, as with 99.9% of collections practices, you don’t really care about offering a granular look at the artwork. But as a conservator, that’s precisely what I care about, and so that’s really what we’re trying to address now with the DRMC system.
How is the DRMC set up?
   OK, so the DRMC, the Digital Repository for Museum Collections, is essentially about collecting all of that carefully detailed information I mentioned and placing it into a coherent framework. We’re working on developing a system that allows us to upload those various materials in a browser-based interface and organize them on a dashboard, where those various relationships are established and integrated into an archival package. That way, we get a much clearer sense of the item, what’s needed to run it, its relationship to other works, and so on.
   The thing is, we’re talking about a digital archive, a digital collection of born-digital materials, so to present it merely as some sort of index card with a photo and some basic metadata is just ridiculous. If you’re working with something that’s natively digital, you should be providing instant access to the actual work in a really rich way. So we’ve been experimenting with different approaches. If it’s a web-based work, for instance, we’ve been looking into loading the piece into an IFrame, so that you’re seeing the work functioning with some kind of overlay. Or if it’s an old webpage with animated GIFs, those elements will be moving, as opposed to the documentation being a simple screenshot.
   These ideas have been in place for a long time, but they’ve never been fully realized. So with this new system, we’re really building, for the first time, a repository specific to the broader needs of a museum, which we’re very excited about – especially because our colleagues at SFMoMA and Tate are looking at similar systems. So it seems that in the near future, there could be some interoperable standards shared between these institutions, which is really great.
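[To make the presentation approach described above concrete, here is a minimal, hypothetical Python sketch of wrapping an archived web-based work in an iframe with a simple metadata strip; the function, fields, and URL are illustrative assumptions, not MoMA’s actual DRMC code.]

    # Illustration only: render an archived web-based work live in an iframe,
    # with catalog metadata alongside it, rather than reducing it to a screenshot.
    def render_work(title: str, artist: str, year: str, archived_url: str) -> str:
        """Return an HTML page that loads the archived work live in an iframe."""
        return f"""<!DOCTYPE html>
    <html>
      <body style="margin:0">
        <!-- the work itself, rendered live so animated GIFs still move -->
        <iframe src="{archived_url}" style="width:100%;height:90vh;border:none"></iframe>
        <!-- a simple strip of catalog metadata below the work -->
        <div style="padding:8px;font-family:sans-serif">{title}, {artist}, {year}</div>
      </body>
    </html>"""

    if __name__ == "__main__":
        # hypothetical example record
        print(render_work("Untitled (net art)", "Example Artist", "1997",
                          "https://archive.example.org/works/untitled/index.html"))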
So you’re actively in contact with your counterparts at other institutions?
   Absolutely. For one thing, at MoMA, we’re part of a consortium called Matters in Media Art, which includes us, SFMoMA, and the Tate. It was originally formed because the New Art Trust divided their collection between the three museums, and that process involved a symposium that brought together everyone that had a stake in some part of the lifecycle of those works, whether it be the museum’s IT department, conservation, care research, or whatever else. That went really well, and so we’re always talking to those people – especially since we’re starting to use the same tools and can compare notes.
   Actually, it sounds weird, but Twitter is really where a lot of the communication between myself and other practitioners takes place. That’s where I met Mark Matienzo, for instance, and he’s someone I’m constantly talking to. It’s really proven useful, since what we do is often so bleeding edge; with some of the issues we’re discussing, you could very well be the first person ever to have encountered them. So I’ve found that it’s really helpful to be as open and public about my work as possible, because there’s definitely this broader community that’s eager to share its research with one another.
In your experience, is it common for digital artists to consult with technical designers or other web professionals while producing their works? I’d imagine that some of these issues with longevity could be predicted, if not avoided, in doing so.
   Well, yes and no. I think that some do work with outside professionals, but even in those cases, it usually doesn’t make much difference in terms of preservation. Take Rafael Rozendaal, for example: He doesn’t really code anything. He may occasionally experiment with Flash on his own, but for the most part, he’ll sketch out the work and hand it off to a friend who does the coding for him. Then they’ll have a back and forth, making decisions together. So for him, it really does become a collaborative process. But then there’s someone like Jonas Lund, who is incredibly technical – he’s an amazing programmer, he does it professionally, and a lot of his work actually has preservation built into it. He’s already considered these long-term issues in his design. We’re collecting a lot of his work at Rhizome, including this series of works where we’re able to peek in on his browsing, whether that means seeing the URL that he’s currently at, or even seeing a picture of his browser live, in real time. What we did for those works is we forked the application, so it’s writing to the database that powers what the viewer’s seeing, but also to a database in our own archive, so that it’s on our infrastructure.
   I don’t think most artists are consulting professionals for the express purpose of addressing longevity. But at Rhizome, we do regularly have artists asking us what they can do in their general practice to ensure that things will last longer. And there are certain best practices in digital preservation that they can follow – controlling who has access to your repository, for instance. But when it comes to actual formats, there’s really no single answer – we have to take it on a case-by-case basis.
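[As a rough, hypothetical illustration of the dual-write setup described above, the following Python sketch writes each captured browsing event both to a “live” database and to a separate archival copy; the SQLite files and schema are assumptions for illustration, not Lund’s or Rhizome’s actual code.]

    # Illustration only: each event is written both to the database that powers
    # what viewers see and to a second, archival database, so the work keeps
    # functioning while a preservation copy accumulates on separate infrastructure.
    import sqlite3

    LIVE_DB = "live.db"        # powers the public-facing work (hypothetical)
    ARCHIVE_DB = "archive.db"  # preservation copy on the archive's infrastructure (hypothetical)

    def record_event(url: str, timestamp: str) -> None:
        for path in (LIVE_DB, ARCHIVE_DB):
            with sqlite3.connect(path) as conn:
                conn.execute(
                    "CREATE TABLE IF NOT EXISTS browsing (url TEXT, ts TEXT)")
                conn.execute(
                    "INSERT INTO browsing (url, ts) VALUES (?, ?)",
                    (url, timestamp))

    if __name__ == "__main__":
        record_event("http://example.com/page", "2013-08-01T14:22:05Z")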
From what you’re saying, it sounds like beyond some basic considerations, it’s really out of the artists’ hands.
   Yeah. I mean, I think the essential problem is really one of storage. The simple fact is that there is no personal type of storage for the individual that is eternal, whereas paper, paintings, or sculptures would theoretically last until they’re collected. With digital work, I think it requires a few different things, chief among them being the need to be a bit freer with your work, allowing it to be distributed. That’s an important part of archiving these works, because otherwise, it’s simply not sustainable. I mean, we’re in the lucky position here [at MoMA] of having a pretty good budget – for any given project, we can go on eBay and pick out the cream of the crop, the really expensive items, in perfect condition, and use them in our process. That’s not something most individuals are able to do. And even then, it’s difficult to get everything together and actually go through the process.
What would you say are the most challenging or pressing formats you’re working with at the moment?
   The real ticking time bombs are definitely the complex installations that were powered by computers, because with those, it’s not just software – you can’t just do a disk image and be done with it. It’s a much more involved process, with all of the same issues of longevity still at work. A good example would be a piece called “Lovers” by Teiji Furuhashi. It’s a very complex piece, with various interacting elements – all powered by computers that were made in 1994. So what we’ve had to do is capture each constituent part on its own. That means capturing the laser discs, digitizing the 35mm slides, imaging the hard drives of the computers, and, most importantly, having to essentially break apart his software, figure out the logic at work, and rebuild it so that we can control the digital aspects through entirely new software, something a bit more considered from an archival standpoint.
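[For readers unfamiliar with the disk-imaging step mentioned above, here is a simplified, hypothetical Python sketch that copies a source byte for byte into an image file and records a checksum for later integrity checks; the paths are assumptions, and real conservation workflows rely on dedicated forensic tools and hardware write blockers.]

    # Simplified illustration of the "disk image" step: copy a source device
    # (or an already-extracted file) byte for byte into an image file, and
    # record a SHA-256 checksum so future fixity checks are possible.
    import hashlib

    SOURCE = "/dev/disk2"          # assumed device path (or a source file)
    IMAGE = "installation_drive.img"  # hypothetical output name

    def image_disk(source: str, image: str, chunk_size: int = 1024 * 1024) -> str:
        sha = hashlib.sha256()
        with open(source, "rb") as src, open(image, "wb") as dst:
            while True:
                chunk = src.read(chunk_size)
                if not chunk:
                    break
                dst.write(chunk)
                sha.update(chunk)
        return sha.hexdigest()

    if __name__ == "__main__":
        print("image checksum:", image_disk(SOURCE, IMAGE))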
How important is exhibition documentation to that kind of project?
   Documentation is a crucial part of the process – especially with an immersive media installation with projections, where there are always subtleties that are specific to each iteration. At MoMA, we often do walkthroughs of installations. If there were significant changes made [from previous showings of that work], we’ll walk through the gallery and talk with the curator, have him or her explain why different decisions were made, and even interview the artists as well. All of that material gets added to the repository. So basically, for a given record in the DRMC, you end up with the artists’ materials, the work’s dependencies – what it needs in order to run correctly – and various elements of conservation documentation – videos, images, and texts that provide evidence of the work and inform future conservation treatments.
So much of what you do is geared towards achieving technical accuracy, but it does seem like an essential part of understanding some of these works, particularly the browser-based projects, is having a sense of how people interacted with them in context. As a conservator, how important – and how difficult – would you say it is to communicate that more human narrative?
   It’s incredibly important. There’s always a human connection to conservation – it’s the bedrock of what we do – so retaining a sense of the work’s “social life” is first and foremost. Especially with internet art, because the meaning of the work really can’t be separated from the technology and social context surrounding it. But the same is true for traditional artworks. An interesting recent example would be the Jackson Pollock restoration project that went on this summer at MoMA’s conservation lab (https://www.youtube.com/watch?v=7beFNpp4FxY), where they were able to prove, through archival and chemical evidence, that there were layers of paint on the canvas that had been applied after the artist’s death. So the question became: if the painting has been this way for nearly 50 years now – if that’s the version of the work that’s been looked at, photographed, written about, and criticized – do we leave it? Or do we remove those layers so as to present the work as the artist originally intended? These sorts of considerations exist across the board in conservation practices.
That to me is one of the most interesting things about conservation – that in preserving a given work for posterity, one’s often forced to make these types of decisions that border on the ethical, even moralistic.
   Absolutely. Artifacts are always seen through multiple perspectives over time, they’re constantly reinterpreted, and conservation practices are always changing as well. I mean, when you look at the history of conservation, it’s just incredible. There was a time where, if a sculpture had pieces missing, the conservator would actually “repair” it by filling in those gaps themselves. That has since given way to the cult of the fragment, where you show exactly what’s there and nothing more. So today, if you visit the Met, they’ll have ruins on display, parts of buildings or altarpieces with portions missing. Both approaches definitely have ethical and moral implications. Fortunately, when it comes to born-digital restoration, we can have it both ways, since our restored versions are always just copies of the original object. Nothing is permanent.
    You know, with digital works, we talk about retaining authentic viewing experiences, but that idea applies to traditional artworks as well. There are a lot of subtleties in terms of context – the way paintings and sculptures were originally seen – that don’t often occur to people outside of the conservation field. For example, when we look today at a Renaissance painting, we’re looking at a work that, at the time of its creation, was never seen under unnatural light. Perhaps the painting was displayed in a room that never even saw daylight – perhaps it was hung in a personal library and only seen in candlelight. That’s something that we as conservators always have to consider. My boss Jim Coddington [Chief Conservator, MoMA] was telling me recently that in restoring a work from that era, when he’s removing layers of grime or yellowing, he never takes it all the way. To do so would allow us to see the painting’s colors in a way that they were never intended to be seen, simply because it wasn’t possible at the time that they could be seen that way. So he intentionally leaves a bit of that layered grime there, in order to preserve the experience of seeing the work as it would have been seen at the time.
Sort of along those lines: I’ve been reading texts on literary translation lately, and in talking with you, I’m struck by the parallels between that pursuit and conservation – in both cases, you’re ensuring access to works deemed worthy of broader attention; you’re trying to convert content into another format without loss of fidelity; you’re negotiating a result somewhere between naturalizing a work and retaining its original character. All of which brings up these questions of responsibility and license, as you were saying, but which also requires a real familiarity with the artists and their work, so that your decisions might reflect their intentions.
   Exactly. Like, part of what we’re trying to do with our archive at Rhizome is establish a way of capturing the preferred rendering environment for each piece. So if it’s Alexei Shulgin’s Form Art (http://artsy.net/artwork/alexei-shulgin-form-art), for instance, we might specify Netscape 3 as the preferred viewing environment. That doesn’t mean one requires that browser in order to run the software; we’re just saying that it’s preferred, as it most closely reflects the artist’s intention for the piece. When Rafael Rozendaal donated his entire body of work from the past twelve years to Rhizome, we conducted a series of interviews with him in which we had him sit down with a laptop and interact with every single piece, talking about it as he went. Throughout that process, we were screen-capturing what he was doing, and superimposing over that a webcam feed of his face as he spoke. The result of that was all of this great evidence, where he’s talking about the visual qualities, the look and feel of the work, and that’s really what’s important. In the end, it doesn’t matter if something needs to be run in an emulator, or if we need to write some sort of Flash plug-in replacement once Flash is entirely obsolete; the core, crucial component is that subtle look and feel. It’s about hearing the artist’s side of the story.
   But at the same time, in conservation practices, the artist’s wishes aren’t always primary – they’re actually just one piece of the puzzle. The fact is that an artist’s sense of the work often changes over time, and a collecting institution isn’t always beholden to acknowledge that change. If the artwork’s been collected, been discussed, and that’s what the audience wants to see, then that’s usually how the museum will continue to show it. So while artists’ accounts can provide incredible insight into a given work, it’s not just a matter of doing an interview and leaving it at that. It really does require a connoisseurship of the actual materials and gaining a firsthand understanding of how different technologies affect the work.
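[To give a sense of what capturing a “preferred rendering environment,” as described above, might look like as catalog data, here is a small, hypothetical record sketched in Python; the field names and environment details are illustrative assumptions, not ArtBase’s actual schema.]

    # Hypothetical sketch of a "preferred rendering environment" record for a
    # work like Form Art. The fields are illustrative: the point is that the
    # environment is recorded as the preferred way to see the work, not as a
    # hard technical requirement.
    import json

    record = {
        "work": "Form Art",
        "artist": "Alexei Shulgin",
        "year": 1997,
        "preferred_environment": {
            "browser": "Netscape Navigator 3",
            "platform": "Windows 95",         # assumption for illustration
            "display": "800x600, 256 colors"  # assumption for illustration
        },
        "note": "Environment is preferred, not required; it best reflects "
                "the artist's intention for the piece.",
    }

    print(json.dumps(record, indent=2))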
That makes sense. It’s like you were saying a moment ago: an artwork’s longevity depends on its being able to sustain different readings over time, and that process really isn’t about authorial intent or prescribed readings – it’s about how people from different places and eras relate to what they’re experiencing. The work itself is the only constant in the equation, so it really becomes more a question of presentation.
   Definitely, although I do think there’s an important distinction to be made between works of art and design objects. Like, as one of my projects here at MoMA, we’re in the process of setting up the lab for a case study comparing authentic environments and emulations for video games, running the two side by side and doing careful analysis. That’s a very particular process when it comes to video games – in the case of the Atari 2600, for instance, we’re dumping cartridges into an Atari emulator called Stella (http://stella.sourceforge.net/), which is great, since Stella is capable of CRT [Cathode Ray Tube] emulation. But the problem we’re addressing here, in part, is that while there are a lot of people doing research on CRT emulation, they’re usually not doing it in an evidence-based, scientific manner. They’re more often doing things from memory, where they have a sense of what CRT looks like and say, “Yeah, that looks about right.” They’re not actually looking at a specific CRT from an arcade machine, or in the case of Atari, a home television set from that period. So to work with the 2600, we’ve actually found a consumer-grade television from 1978 and use that in our process, which allows us to maintain a sense of accuracy in the display. That’s really what makes DRMC so useful: because we’re relying on firsthand experience of the original materials, we can provide exacting documentation, down to the very shape of the pixels and size of the scan lines, so someone trying to install an Atari work 40 years from now will have a very accurate sense of what one experienced when playing Yars’ Revenge (https://www.youtube.com/watch?v=pvjajVf3BEc) in 1983.
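[As a toy illustration of the kind of evidence-based comparison described above, this hypothetical Python sketch uses the Pillow imaging library to difference a photograph of a period CRT against an emulator capture; the filenames are assumptions, and genuine analysis would control for camera, calibration, and display geometry.]

    # Toy illustration of comparing an emulator's output against photographic
    # evidence of a period CRT display. Filenames are hypothetical.
    from PIL import Image, ImageChops

    def compare(reference_photo: str, emulator_capture: str) -> float:
        """Return the mean per-channel difference between two images."""
        ref = Image.open(reference_photo).convert("RGB")
        emu = Image.open(emulator_capture).convert("RGB").resize(ref.size)
        diff = ImageChops.difference(ref, emu)
        pixels = list(diff.getdata())
        return sum(sum(px) for px in pixels) / (len(pixels) * 3)

    if __name__ == "__main__":
        score = compare("crt_1978_photo.png", "stella_capture.png")
        print(f"mean channel difference: {score:.2f}")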
Are these ideas reflected in how the institution chooses to exhibit the items publicly?
   Absolutely. Particularly with video games, we’re trying to present it in a way that allows the audience to consider it as a design object, as interface design, rather than have them turn into 10-year-old boys and play around. So our strategy has been to strip out the hardware, to display them devoid of any huge machinery, but still retain an accurate rendering. It’s just screens behind walls, with very minimal protruding elements that hold the controllers. Another big consideration is whether people should be viewing the game or actually playing it. Like with Sim City 2000, we’d show documentary videos of various cities that people have built, simply because that’s much more illustrative of what the game entails than leaving people to click around aimlessly for a few minutes, which is the most that a person tends to spend on a work in a gallery setting. I think we tend to have a lot more freedom in these considerations when working with a design object as opposed to a work of art – both may have specific ways they’re meant to be experienced, but the ethics of taking creative license in how a design object can or should be seen is usually much more open for interpretation.
So what’s been the nature of your involvement with XFR STN?
   Johanna Burton [the New Museum’s director and curator of education and public programs] brought me on board in the planning stages. I think that even then, she had an inkling of how complex the show was going to be, and was looking for people to help in answering any number of questions: How many appointments do we have in a day? How long should they be? What size files are we producing? What’s the turnaround time? How are we getting this content over to the Internet Archive? The show is really one complex system of active parts relying on one another, so Johanna ended up putting together a team of people from across the museum to help out, including me.
   I was brought in specifically for my expertise in born-digital materials. So I designed and set up that particular station, trained the technicians, and generally helped with the systems design side of things, figuring out how everything could work together flawlessly. That process took place over two months or so. Once the show launched, though, my involvement has been fairly minimal. I’m really only needed on the floor when something tricky comes up, like if something breaks in the born-digital setup, or there’s a problem with the web app I built for cataloging, or simply if we’re preparing to dump the database, which is a large undertaking, seeing as we’ve already captured around 260 items so far.
Have any of the artist submissions stood out to you, whether for the complexity of the archiving process or the nature of their content?
   In terms of the preservation and capture process, nothing really stands out, just because this project was purposefully limited in its scope: we’ve been very specific about the formats we work with, very clear that we can’t work with anything else, so there really haven’t been any red herrings in the system. But on the content side of things, there have definitely been a lot of outstanding submissions. For instance, a lot of what these artists are digitizing are home videos from downtown New York in the 70s and 80s, with footage documenting gallery openings and what not, and a lot of random people pop up in those. It was also really interesting to work on this piece by Dov Jacobson called “Human Vectors,” which is a very early 3-D modeling animation done on a Vectrex computer. Actually, there’s an upcoming born-digital appointment that I’m really looking forward to, which is with Phil Sanders. He ran a gallery in the East Village in the 80s called RYO, which was devoted to computer artists, and he’ll be bringing in a lot of his own materials and documentation from that time. Nobody’s really heard of that gallery – it’s sort of an untold piece of history in terms of art and technology – so we’re really excited about it.
All told, I guess what strikes me most about the exhibition is that, as with ArtBase, it seems like you view it less as an artistic offering than as a necessary, even urgent public service.
   Yes. There’s absolutely a sense of urgency – and it’s not even a sense of urgency; it’s a simple, realistic fact that a lot of the work we’re doing won’t be possible in even a few years. That’s really for two reasons: first, there’s the removable media, which is aging, deteriorating over time. But you also need the hardware that allows you to read that material, which has the same issues of longevity and is often both rare and expensive. It’s almost impossible for these tasks to be undertaken by the artists themselves. So there’s definitely a bottom line with what we’re doing, which is that if we don’t make efforts to recover and document these materials today, we won’t have the chance to do so later.
Tags:
  • Ben Fino-Radin
  • Digital
  • Art
  • ArtBase
  • Archives
  • Rhizome
  • Christopher Schreck
  • Saul Metnick
  • Ports Bishop
  • Takeshi Fukunaga
  • Kana Ariyoshi

02.18.2014
