841 Works

Creating Value and Impact in the Digital Age Through Translational Humanities

Abby Smith Rumsey
Following the completion in July 2011 of our last planned summer session, SCI entered a new phase of work (1 January 2012 to 31 August 2013) focusing on the following program areas:
  • Scholarly Production
  • Graduate Education
  • The Value of the Humanities in the Digital Age
SCI undertook concentrated work in these three areas, with continued generous support from The Andrew W. Mellon Foundation. Our goals for this period included fostering further development...

When Do Match-compilation Heuristics Matter?

Kevin Scott & Norman Ramsey
Modern, statically typed, functional languages define functions by pattern matching. Although pattern matching is defined in terms of sequential checking of a value against one pattern after another, real implementations translate patterns into automata that can test a value against many patterns at once. Decision trees are popular automata. The cost of using a decision tree is related to its size and shape. The only method guaranteed to produce decision trees of minimum cost requires...
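As a rough, illustrative sketch of the mechanism this abstract refers to (not taken from the paper), the Python fragment below hand-builds two decision trees for the same hypothetical three-rule match over pairs of constructor tags. Both trees are correct, but they test positions in different orders and so differ in shape, which is exactly the size/cost trade-off that match-compilation heuristics navigate.

```python
# Illustrative sketch only: hand-built decision trees for the hypothetical patterns
#   rule 0: (Nil,  _   )
#   rule 1: (_,    Nil )
#   rule 2: (Cons, Cons)
# Values are pairs of constructor tags; each internal node tests one position.

def match(tree, value):
    """Walk a decision tree: internal nodes test a position, leaves are rule indices."""
    while isinstance(tree, dict):
        pos = tree["test"]            # which component of the pair to inspect
        tag = value[pos]              # constructor tag found at that position
        tree = tree["edges"][tag]     # follow the edge labelled with that tag
    return tree                       # leaf = index of the first matching rule

# One heuristic choice: test position 0 first, then position 1 where needed.
tree_left_first = {
    "test": 0,
    "edges": {
        "Nil":  0,
        "Cons": {"test": 1, "edges": {"Nil": 1, "Cons": 2}},
    },
}

# A different choice: test position 1 first.  Same match, different shape/cost.
tree_right_first = {
    "test": 1,
    "edges": {
        "Nil":  {"test": 0, "edges": {"Nil": 0, "Cons": 1}},
        "Cons": {"test": 0, "edges": {"Nil": 0, "Cons": 2}},
    },
}

assert match(tree_left_first,  ("Cons", "Nil")) == 1
assert match(tree_right_first, ("Cons", "Nil")) == 1
```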

Alloyed Branch History: Combining Global and Local Branch History for Robust Performance

Zhijian Lu, John Lach, Mircea Stan & Kevin Skadron
This paper introduces “alloyed” prediction, a new two-level predictor organization that combines global and local history in the same structure, uniting the advantages of two-level predictors and hybrid predictors. The alloyed organization is motivated by measurements showing that “wrong-history mispredictions” are even more important than conflict-induced mispredictions. Wrong-history mispredictions arise because current two-level, history-based predictors provide only global or only local history. The contribution of wrong-history to the overall misprediction rate is substantial because most...
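The Python fragment below is a minimal sketch of the indexing idea described above, not the paper's predictor: it concatenates a few global history bits, a few per-branch (local) history bits, and low-order branch address bits to index a single table of 2-bit saturating counters. All sizes and helper names are assumptions chosen for illustration.

```python
# Illustrative sketch only: "alloyed" indexing of a table of 2-bit counters.

GLOBAL_BITS, LOCAL_BITS, ADDR_BITS = 4, 4, 4          # assumed toy sizes
TABLE_SIZE = 1 << (GLOBAL_BITS + LOCAL_BITS + ADDR_BITS)

counters = [2] * TABLE_SIZE      # 2-bit saturating counters, start weakly taken
global_hist = 0                  # shared shift register of recent outcomes
local_hist = {}                  # per-branch shift registers

def index(pc):
    """Concatenate global history, local history, and branch address bits."""
    g = global_hist & ((1 << GLOBAL_BITS) - 1)
    l = local_hist.get(pc, 0) & ((1 << LOCAL_BITS) - 1)
    a = pc & ((1 << ADDR_BITS) - 1)
    return (g << (LOCAL_BITS + ADDR_BITS)) | (l << ADDR_BITS) | a

def predict(pc):
    return counters[index(pc)] >= 2          # taken if counter in upper half

def update(pc, taken):
    global global_hist
    i = index(pc)
    counters[i] = min(3, counters[i] + 1) if taken else max(0, counters[i] - 1)
    global_hist = ((global_hist << 1) | int(taken)) & ((1 << GLOBAL_BITS) - 1)
    local_hist[pc] = ((local_hist.get(pc, 0) << 1) | int(taken)) & ((1 << LOCAL_BITS) - 1)

# Usage: an alternating branch is captured by its local history bits even when
# the global history is polluted by unrelated branches.
for outcome in [True, False] * 8:
    predict(0x40)
    update(0x40, outcome)
```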

Perceptually Driven Simplification Using Gaze-Directed Rendering

David Luebke, Benjamin Hallen, Dale Newfield & Benjamin Watson
We present a unique polygonal simplification method grounded in rigorous perceptual science. Local simplification operations are driven directly by perceptual metrics, rather than the geometric metrics common to other algorithms. The effect of each operation on the final image is considered in terms of the contrast the operation will induce in the image and the spatial frequency of the resulting change. Equations derived from the psychophysical studies determine whether the simplification operation will be perceptible;...

A Calculus for End-to-end Statistical Service Guarantees

Jorg Liebeherr, Stephen Patek & Almut Burchard
The deterministic network calculus offers an elegant framework for determining delays and backlog in a network with deterministic service guarantees to individual traffic flows. A drawback of the deterministic network calculus is that it only provides worst-case bounds. Here we present a network calculus for statistical service guarantees, which can exploit the statistical multiplexing gain of sources. We introduce the notion of an effective service curve as a probabilistic bound on the service received by...
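One common way to state the notion named in this abstract, paraphrased rather than quoted from the paper: with cumulative arrivals A(t), departures D(t), and the min-plus convolution used throughout network calculus, a function S^ε is an effective service curve with violation probability ε if the bound below holds for all t.

```latex
% Paraphrased statement (an assumption about the standard form, not a quotation).
% Min-plus convolution: (A \otimes S)(t) = \inf_{0 \le \tau \le t} [ A(\tau) + S(t - \tau) ].
\[
  \Pr\Bigl\{\, D(t) \;\ge\; \inf_{0 \le \tau \le t}\bigl[\,A(\tau) + S^{\varepsilon}(t-\tau)\,\bigr] \Bigr\}
  \;\ge\; 1 - \varepsilon
  \qquad \text{for all } t \ge 0 .
\]
```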

FLUTE: A Flexible Real-Time Data Management Architecture for Performance Guarantees

K. Kang, S. Son, J. Stankovic & T. Abdelzaher
Efficient real-time data management has become increasingly important as real-time applications become more sophisticated and data-intensive. In data-intensive real-time applications, e.g., online stock trading, agile manufacturing, sensor data fusion, and telecommunication network management, it is essential to execute transactions within their deadlines using fresh (temporally consistent) sensor data, which reflect the current real-world states. However, it is very challenging to meet this fundamental requirement due to potentially time-varying workloads and data access patterns in these...

Memory Reference Reuse Latency: Accelerated Sampled Microarchitecture Simulation

J. Haskins Jr. & K. Skadron
This paper explores techniques for speeding up sampled microprocessor simulations by exploiting the observation that of the memory references that precede a sample, references that occur nearest to the sample are more likely to be germane during the sample itself. This means that accurately warming up simulated cache and branch predictor state only requires that a subset of the memory references and control-flow instructions immediately preceding a simulation sample need to be modeled. Our technique...
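As an illustration of the underlying idea, and not the paper's actual procedure, the Python sketch below estimates how many trailing pre-sample references to replay by looking at reuse latencies (how far back each reused address was last touched); the function name, the coverage parameter, and the toy trace are all hypothetical.

```python
# Illustrative sketch only: choose a warm-up window from reuse latencies.

def warmup_window(pre_sample_refs, coverage=0.99):
    """Return how many trailing references to replay so that roughly `coverage`
    of the reuses observed in the window have their prior use included."""
    last_seen = {}
    reuse_latencies = []
    for i, addr in enumerate(pre_sample_refs):
        if addr in last_seen:
            reuse_latencies.append(i - last_seen[addr])   # distance to prior use
        last_seen[addr] = i
    if not reuse_latencies:
        return len(pre_sample_refs)
    reuse_latencies.sort()
    k = int(coverage * (len(reuse_latencies) - 1))
    return reuse_latencies[k]

# Hypothetical trace: reuses are mostly recent, so a short warm-up suffices.
trace = [0x10, 0x20, 0x10, 0x30, 0x20, 0x10, 0x40, 0x30]
print(warmup_window(trace, coverage=0.9))
```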

Libra Electronic Theses and Dissertation (ETD) Deposits: 2015 Statistics

Ellen Ramsey & Sherry Lake
This document shows statistics tracking immediate worldwide availability and embargo choices of doctoral and master's students depositing theses and dissertations into Libra, the University of Virginia's institutional repository, in 2015.

Evaluating Astronomical Institutional Productivity Using the Astrophysics Data System Database

Eric Schulman, James French & Allison Powell
We used the Astrophysics Data System (ADS) to measure the productivity of the 38 institutions studied by Abt during the period 1985 to 1994. The ADS database contains 84,822 astronomical papers published in Astronomy and Astrophysics, The Astronomical Journal, The Astrophysical Journal, Monthly Notices of the Royal Astronomical Society, and The Publications of the Astronomical Society of the Pacific during this period. For each of these papers we compared the affiliation of each author to...

A Survey of Configurable, Component-based Operating Systems for Embedded Applications

Luis Friderich, John Stankovic, Marty Humphrey, Michael Marley & John Haskins
With the proliferation of embedded applications, criteria such as cost-effective variations of the product, flexible operation of the product, minimal time to market, and minimal product costs become deep concerns for embedded software industries. Component-based software is becoming an increasingly popular technology as a means for constructing complex software systems by assembling off-the-shelf building blocks, providing the ability to deal with customization and reconfiguration issues. However, many of the component-based methodologies utilize large...

Simple Alternate Routing for Differentiated Services Networks

Stephen Patek, Raja Venkateswaran & Jorg Liebeherr
Recent work on differentiated services in the Internet has defined new notions of Quality of Service (QoS) that apply to aggregates of traffic in networks with coarse spatial granularity. Most proposals for differentiated services involve traffic control algorithms for aggregate service levels, packet marking and policing, and preferential treatment of marked packets in the network core. The issue of routing for enhancing aggregate QoS has not received a lot of attention. This study investigates the...

Caches As Filters: A Unifying Model for Memory Hierarchy Analysis

Dee Weikle, Kevin Skadron, Sally McKee & Wm Wulf
This paper outlines the new caches-as-filters framework for the analysis of caching systems, describing the functional filter model in detail. This model is more general than those introduced previously, allowing designers and compiler writers to understand why a cache exhibits a particular behavior, and in some cases indicating what compiler or hardware techniques must be employed to improve a cache hierarchy's performance. Three components of the framework, the trace-specification notation, equivalence class concept, and new...

RAP: A Real-Time Communication Architecture for Large-Scale Wireless Sensor Networks

Chenyang Lu, Brian Blum, Tarek Abdelzaher, John Stankovic & Tian He
Large-scale wireless sensor networks represent a new generation of real-time embedded systems with significantly different communication constraints from traditional networked systems. This paper presents RAP, a new real-time communication architecture for large-scale sensor networks. RAP provides convenient, high-level query and event services for distributed micro-sensing applications. Novel location-addressed communication models are supported by a scalable and light-weight network stack. We present and evaluate a new packet scheduling policy called velocity monotonic scheduling that inherently accounts...
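The sketch below gives one plausible reading of velocity monotonic scheduling in plain Python, not the RAP implementation: a packet's priority grows with the velocity it must sustain (distance to destination divided by time to deadline), so far-away packets with tight deadlines are forwarded first. The class and helper names, units, and example coordinates are hypothetical.

```python
# Illustrative sketch only: a velocity-based forwarding queue.

import heapq
import math

def requested_velocity(src, dst, deadline_s):
    """Distance still to travel divided by the time remaining to the deadline."""
    return math.dist(src, dst) / deadline_s

class VelocityMonotonicQueue:
    """Priority queue that forwards the highest-velocity packet first."""
    def __init__(self):
        self._heap = []
        self._seq = 0                      # tie-breaker so packets never compare

    def push(self, packet, src, dst, deadline_s):
        v = requested_velocity(src, dst, deadline_s)
        heapq.heappush(self._heap, (-v, self._seq, packet))   # max-velocity first
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

# Usage: the far-away, tight-deadline packet is forwarded before the nearby one.
q = VelocityMonotonicQueue()
q.push("near/loose", src=(0, 0), dst=(10, 0), deadline_s=5.0)   # needs 2 m/s
q.push("far/tight",  src=(0, 0), dst=(90, 0), deadline_s=3.0)   # needs 30 m/s
assert q.pop() == "far/tight"
```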

The Structure and Value of Modularity in Software Design

Kevin Sullivan, William Griswold, Yuanfang Cai & Ben Hallen
The concept of information hiding modularity is a cornerstone of modern software design thought, but its formulation remains casual and its emphasis on changeability is imperfectly related to the goal of creating value in a given context. We need better models of the structure and value of information hiding, for both their explanatory power and prescriptive utility. We evaluate the potential of a new theory, developed to account for the influence of modularity on the evolution...

IT: Machine Independent Programming on Hierarchically Nested Machines

Muhammad Yanhaona & Andrew Grimshaw
Andrews in his "Concurrent Programming: Principles and Practice" observes that "concurrent programs are to sequential programs what chess is to checkers". We believe people experienced in both kinds of programming will largely agree with this statement. But what makes parallel programming so difficult? Although opinions differ, none identifies the difficulty of devising a parallel algorithm, as opposed to a sequential one, as the core problem. It is true that construing a parallel...

Solutions for Trust of Applications on Untrustworthy Systems

Jonathan Hill, Jack Davidson & John Knight
Distributed systems rely on non-local applications. At the same time, non-local applications can only be trusted as far as the non-local systems can be trusted. This is inadequate for the purposes of monitoring and maintaining critical infrastructure that relies on a distributed computer system. We require a distributed, flexible, and reliable application system to act non-locally throughout a network. Flexibility encourages a model that utilizes application-level processes, dispatched from a trusted source...

Software Design Spaces: Logical Modeling and Formal Dependence Analysis

Yuanfang Cai & Kevin Sullivan
We lack a useful, formal theory of modularity in abstract software design. A missing key is a framework for the abstract representation of software design spaces that supports analysis of design decision coupling structures. We contribute such a framework. We represent design spaces as constraint networks and develop a concept of design decision coupling based on the minimal change sets of a variable. This work supports derivation, from logical models, of...

Power and Thermal Effects of SRAM vs. Latch-Mix Design Styles and Clock Gating Choices

Yingmin Li & Kevin Skadron
This paper studies the impact on energy efficiency and thermal behavior of design style and clock-gating style in queue and array structures. These structures are major sources of power dissipation, and both design styles and various clock gating schemes can be found in modern, high-performance processors. Although some work in the circuits domain has explored these issues from a power perspective, thermal treatments are less common, and we are not aware of any work...

The Engineering Roles of Requirements and Specification

Elisabeth Strunk, Carlo Furia, Matteo Rossi, John Knight & Dino Mandrioli
The distinction between requirements and specification is often confused in practice. This obstructs the system validation process, because it is unclear what exactly should be validated, and against what it should be validated. The reference model of Gunter et al. addresses this difficulty by providing a framework within which requirements can be distinguished from specification. It separates world phenomena from machine phenomena. However, it does not explain how the characterization can be used to help...

A Practical Acoustic Localization Scheme for Outdoor Wireless Sensor Networks

Jingbin Zhang, Ting Yan, John Stankovic & Sang Son
Localization for outdoor wireless sensor networks has been a challenge for real applications. Although many solutions have been proposed, few of them can be used in real applications because of their high cost, low accuracy or infeasibility due to practical issues. In this paper, we propose a practical acoustic localization scheme called Thunder. Thunder employs an asymmetric architecture and shifts most of the complexities and hardware requirements from each node to a single powerful centralized...

Microarchitectural Floorplanning for Thermal Management: A Technical Report

Sivakumar Velusamy & Kevin Skadron
In current day microprocessors, exponentially increasing power densities, leakage, cooling costs, and reliability concerns have resulted in temperature becoming a first class design constraint like performance and power. Hence, virtually every high performance microprocessor uses a combination of an elaborate thermal package and some form of Dynamic Thermal Management (DTM) scheme that adaptively controls its temperature. While DTM schemes exploit the important variable of power density to control temperature, this paper attempts to show that...

Synchronization of Temporal Constructs in Distributed Multimedia Systems with Controlled Accuracy

Sang Son & Nipun Agarwal
With the inception of technology in communication networks such as ATM it will be possible to run multimedia applications on future integrated networks. Synchronization of the related media data is one of the key characteristics of a multimedia system. In this paper we present a scheme for synchronization of multimedia data across a network where the accuracy of detecting asynchronization and predicting the future asynchrony is variable and can be tailored to the intended...

Availability and Latency of World Wide Web Information Servers

Charles Viles & James French
During a 90 day period in 1994, we measured the availability and connection latency of HTTP (hypertext transfer protocol) information servers. These measurements were made from a site in the Eastern United States. The list of servers included 189 servers from Europe and 324 servers from North America. Our measurements indicate that on average, 5.0% of North American servers and 5.4% of European servers were unavailable from the measurement site on any given day. As...

The NAS Parallel Benchmark Kernels in MPL

Adam Ferrari, Adrian Filipi-Martin & Soumya Viswanathan
The Numerical Aerodynamic Simulation (NAS) Parallel Benchmarks are a set of algorithmically specified benchmarks indicative of the computation and communication needs of typical large-scale aerodynamics problems. Although a great deal of work has been done with respect to implementing the NAS Parallel Benchmark suite on high-end vector supercomputers, multiprocessors, and multicomputers, only recently has the possibility of running such demanding applications on workstation clusters begun to be explored. We implemented a subset of the NAS...

Interface Negotiation and Efficient Reuse: A Relaxed Theory of the Component Object Model

Kevin Sullivan & Mark Marchukov
Reconciling requirements for (1) the efficient integration of independently developed and evolving components and (2) the evolution of systems built from such components requires novel architectural styles, standards and idioms. Traditional object-oriented approaches have proven inadequate. Two important new mechanisms supporting integration and evolution are dynamic interface negotiation and aggregation, an approach to efficient composition. Both feature prominently in the Component Object Model (COM), a de facto standard providing the architectural foundation for many important...

Registration Year

  • 2017: 841

Resource Types

  • Report: 841

Affiliations

  • University of Virginia: 24
  • Bonn-Rhein-Sieg University of Applied Sciences: 3
  • University of Bonn: 1