Computer Systems Research Day
September 4th, 2008.
Location: Room 308, Huxley Building, Exhibition Road, South Kensington, Imperial College London
Invited talks are 25 minutes, with 15 minutes for audience questions and discussion.
If you would like to attend, please send an email to Qiang Wu (qiangwu AT doc.ic.ac.uk), cc oskar AT doc.ic.ac.uk, with subject: research day
Prof Alan Mycroft, University of Cambridge
Dr Christoph Hagleitner, IBM Research, Zurich
Prof Jinian Bian, Tsinghua University
Dr Benedict R. Gaster, Advanced Micro Devices
Prof Tsahi Birk, Technion - Israel Institute of Technology
Prof Helmut Jakubowicz, Imperial College London
Prof John V McCanny, CBE FRS FREng IEEE Fellow FIAE, Queen's University Belfast
Dr Pablo Molinero Fernandez, Ericsson
Programming Languages and Hardware Evolution
Most programming languages are oriented around the assumption of
an underlying von Neumann model. We examine recent trends in
hardware design, including multi-core, and show how exploiting
these requires changes to programming languages.
Alan Mycroft is Professor in Computing at the University of Cambridge.
He has a BA in Mathematics (Cambridge) and a PhD in
Computer Science (Edinburgh), and worked in Edinburgh and at Chalmers
(Gothenburg) before taking up his post in Cambridge.
His research has concentrated on the interplay of theory and practice
in programming languages.
Historically the scaling of processor performance was based on a
simultaneous increase of clock-frequency and areal density for each
technology generation. While the scaling laws for areal density still
apply, the scaling of clock frequency has almost stopped. The
current trend is to scale the number of cores on a processor, but
recently systems that include specialized accelerator cores have
attracted a lot of attention because of their performance and power
advantages. In this talk I will describe an architecture for
accelerators based on our BFSM concept. Application examples include
pattern matching engines as well as an advanced header parser.
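As a software illustration of the pattern-matching engines mentioned above, here is a toy finite-state matcher. It is a hypothetical sketch in the spirit of such accelerators, not IBM's actual B-FSM design, which compiles many patterns into compressed transition tables evaluated in hardware at one transition per input byte.

```python
# Toy finite-state pattern matcher (hypothetical sketch, not the
# B-FSM): the state is the length of the longest pattern prefix that
# matches a suffix of the input seen so far, as in a KMP-style DFA.

def next_state(pattern, state, ch):
    """Advance the matcher by one input character."""
    seen = pattern[:state] + ch
    for k in range(min(len(seen), len(pattern)), 0, -1):
        if pattern[:k] == seen[-k:]:
            return k
    return 0

def scan(pattern, stream):
    """Return end positions of all (possibly overlapping) matches."""
    state, hits = 0, []
    for i, ch in enumerate(stream):
        state = next_state(pattern, state, ch)
        if state == len(pattern):
            hits.append(i)
    return hits

print(scan("abab", "xababab"))   # → [4, 6]
```

A hardware engine would precompute `next_state` into a transition table indexed by (state, byte), which is what makes line-rate scanning feasible.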
Dr. Christoph Hagleitner received his Ph.D. in electrical engineering
from ETH Zurich with a thesis on a CMOS single-chip gas detection
system. He then headed the circuit-design group of the Physical
Electronics Laboratory. During his thesis work, he specialized in
interface circuitry and system aspects of CMOS integrated micro- and
nanosystems. In 2003 he joined the IBM Research Laboratory in Zurich
to work on the analog front-end and mixed-signal design for a novel
probe-storage device. Since December 2007 he has headed the
accelerator technologies group at the IBM Research Laboratory in
Zurich. Dr. Hagleitner is the author of more than 40 papers in
scientific journals and conference proceedings.
SAT for Formal Verification, from Gate Level to High Level
SAT is widely applied in electronic design automation (EDA), especially formal verification, as well as in other fields. SAT solving techniques have improved rapidly in the last decade, and Boolean SAT has been applied successfully in gate-level formal verification. At the gate level, SAT solvers can be divided into CNF-based solvers, circuit-based solvers and combined solvers. They have performed well in model checking, especially Bounded Model Checking (BMC) and Unbounded Model Checking (UMC).
Recently, for formal verification at the Register Transfer Level (RTL), the hybrid SAT problem, which contains not only Boolean but also word-level variables, has received more and more attention. Some efficient algorithms for the hybrid SAT problem have been presented, which use a complete hybrid branch-and-bound strategy with conflict-driven learning.
Satisfiability Modulo Theories (SMT) is considered the second generation of verification engines, following the first generation based on BDDs and SAT. It is applied in high-level circuit verification, combined with hypergraph partitioning to improve solving efficiency.
Transaction Level Modeling (TLM) is now a new trend in EDA; TLM 2.0 was released at DAC 2008. Can SAT be extended to TLM? Some possible solutions will be shown in the talk.
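To make the gate-level CNF setting concrete, the sketch below is a minimal DPLL-style decision procedure (an illustration, not one of the talk's solvers); modern CNF-based solvers add conflict-driven clause learning, watched literals and restarts on top of this basic search.

```python
# Minimal DPLL-style SAT sketch (illustrative only).  Clauses use the
# DIMACS convention: a list of lists of non-zero ints, where 3 means
# variable 3 is true and -3 means it is false.

def dpll(clauses, assignment=None):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    if assignment is None:
        assignment = set()
    # Simplify: drop satisfied clauses, strip falsified literals.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                      # clause already satisfied
        rest = [lit for lit in clause if -lit not in assignment]
        if not rest:
            return None                   # conflict: empty clause
        simplified.append(rest)
    if not simplified:
        return assignment                 # all clauses satisfied
    # Unit propagation: a one-literal clause forces its literal.
    for clause in simplified:
        if len(clause) == 1:
            return dpll(simplified, assignment | {clause[0]})
    # Branch on the first literal of the first clause.
    lit = simplified[0][0]
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))
```

For example, `dpll([[1, 2], [-1, 2], [-2, 3]])` yields a model containing literals 2 and 3, while `dpll([[1], [-1]])` returns None.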
Jinian Bian is a professor of Computer Science and Technology at Tsinghua University, Beijing, China. He graduated from Tsinghua University in 1970, then joined its faculty, and has been a full professor since 1999. He was a visiting scholar at Kyoto University, Japan, from 1985 to 1986. His interests are in electronic design automation, including design verification and test at the register transfer level and electronic system level, high-level and logic-level synthesis, as well as high-level-aware floorplanning and placement. He has conducted and jointly conducted a number of key projects on design automation for integrated circuits supported by the Chinese government. He is a TPC member of several international conferences, such as FPL, FPT, ISCAS, ASP-DAC and ATS.
Benedict R. Gaster and Jayanth Gummaraju
Executing General purpose GPU kernels on the CPU
There are numerous examples of parallel programming models for both
the multi-core CPU and the massively data-parallel GPU. Still, today it is not
possible to efficiently execute code intended for the CPU on the GPU, or
vice versa. In this presentation we outline some possible approaches for efficient
execution of GPU compute shaders on multi-core CPUs.
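One way to picture the problem, sketched here with invented names rather than AMD's actual mechanism, is to map a GPU-style kernel launch onto a CPU thread pool. Efficient schemes fuse many work-items into vectorized loops per core; this sketch only shows the execution-model mapping.

```python
# Hypothetical sketch: emulate a data-parallel GPU dispatch by mapping
# work-items onto a CPU thread pool (one task per work-item, which is
# far too fine-grained in practice but shows the model).
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(gid, a, x, y, out):
    """One work-item: out[gid] = a * x[gid] + y[gid]."""
    out[gid] = a * x[gid] + y[gid]

def launch_on_cpu(kernel, global_size, *args, workers=4):
    """Stand-in for a GPU kernel launch over `global_size` work-items."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for gid in range(global_size):
            pool.submit(kernel, gid, *args)
    # Leaving the with-block waits for all work-items to finish.

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(x)
launch_on_cpu(saxpy_kernel, len(x), 2.0, x, y, out)
print(out)   # → [12.0, 24.0, 36.0, 48.0]
```

A realistic mapping would chunk the index space so each core runs a contiguous range of work-items in a tight loop, amortizing scheduling overhead.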
Benedict R. Gaster works for Advanced Micro Devices, Santa Clara,
California, in the Computer Graphics Group, where he is the architectural
lead for the next generation of compilers for general-purpose computing on
the GPU. Before his current position, he was the lead for ClearSpeed's Cn
compiler targeting the CSX family of micro-processors, based in Bristol, UK.
He received his Ph.D. in computer science from Nottingham
University in 1998.
On Spending Storage Space and Exploiting Stored Data in order to Mitigate Storage and Communication Performance Bottlenecks
Storage-related communication (web browsing, transmission of stored video and images, remote backup, etc.) constitutes a major fraction of Internet traffic. While communication bandwidth and the storage capacity of magnetic disk drives have been growing very rapidly, disk transfer rates have not kept up and disk access time has hardly changed. In this talk, we show how to spend the abundant resource, namely storage space, and even exploit stored data in order to reduce disk-access and/or communication requirements, thereby increasing achievable system performance. This is briefly demonstrated in several diverse settings.
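As one concrete (and deliberately simplified, hypothetical) instance of spending storage to gain performance, a single XOR parity block lets a scheduler serve any data block either directly or by reconstruction from the remaining blocks, so a read can avoid the busiest disk:

```python
# Hypothetical sketch: XOR parity spends one extra "disk" of storage
# but gives a read scheduler two ways to serve every data block.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "disks"
parity = xor_blocks(data)            # one extra parity "disk"

# Serve a read of block 1 without touching its (busy) disk:
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])   # → True
```

The extra space costs one block in four here; the payoff is scheduling freedom, which is the general flavour of redundancy-for-performance trade-offs.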
Yitzhak (Tsahi) Birk is an associate professor in the Electrical Engineering Department at the Technion, and heads its Parallel Systems Laboratory. He received his B.Sc. (cum laude) and M.Sc. from the Technion, and a Ph.D. from Stanford University, all in electrical engineering. From 1986 to 1991, he was a Research Staff Member at IBM's Almaden Research Center.
Prof. Birk's research interests include computer and communication systems. He is particularly interested in parallel and distributed architectures for information systems, including communication-intensive storage systems, with special attention to the true application requirements in each case. The judicious exploitation of redundancy for performance enhancement in these contexts has been the subject of much of his recent work. He is also engaged in research into various facets of processor architecture, attempting "cross fertilization" between his various areas of research.
Computing Requirements for Seismic Data Processing
Seismic data processing has a long history of requiring the most
powerful and advanced computer technology. Indeed, many advances in
computer technology (including integrated circuits and array processors)
have either been developed in response to, or arisen from, the
requirements of the geophysical industry. Currently, geophysics remains one of the
largest users of HPC systems, with 49 of the Top 500 Supercomputers
dedicated to geophysical applications. Furthermore, several systems used
by seismic contractors, but which are not included in the Top 500 list,
already operate at speeds in excess of a petaflop. In this talk we will
review the state-of-the-art in seismic computing, together with the
hardware and software that are likely to be required by the industry in
the next few years. Based on both past experience and algorithm
requirements, we will show that seismic processing should continue to
offer some of the biggest computational challenges for many more years to come.
Helmut Jakubowicz is the PGS Professor of Petroleum Geophysics at
Imperial College London. Prior to his appointment in 2007, he worked for
twenty-seven years within the seismic industry, and was active both in
research and field operations. His research interests include seismic
data acquisition and processing.
The Institute of Electronics, Communications and Information Technology - Creating Wealth through Research and Innovation
An overview will be given of the activities of the Institute of
Electronics, Communications and Information Technology (ECIT) at
Queen's University Belfast. ECIT, which opened in 2004, occupies a
specially designed 4000m2 building, located off-campus, and is the
University's research flagship on the Northern Ireland Science
Park. ECIT has brought together over 130 research
specialists in complementary fields of Electronics and Computer
Science and has now established extensive global industrial and
university research connections and collaborations. ECIT undertakes
ambitious, real-world "mission-orientated" programmes with teams that bring
together complementary strengths and expertise. A number of examples
of such programmes will be outlined. An important role is to work with
companies, to share longer-term industrial and technology road-maps and
to engage in "over-the-horizon" research in selected areas that are
aligned with ECIT's expertise. This is done in an environment that
has been designed to foster innovation and commercialisation that
directly impacts the wider economy. When the ECIT project was first
announced (2003) the Science Park was a derelict site (originally part
of the former Harland and Wolff Shipyard). Today 1500 people are
employed there, generating around £50M p.a. in salaries alone. To date, three
internally created spin-off companies have been formed, with a further
seventeen externally created early-stage ICT-based companies
("spin-ins") located within the building. The talk will give a
summary of the Institute's main activities, how these differ from a
more conventional university environment, and highlight important
aspects that foster strong links between academic research, innovation
and wealth creation. It will also summarise important plans for the future.
Professor John McCanny is Head of the School of Electronics,
Electrical Engineering and Computer Science, and Director of the Institute
of Electronics, Communications and Information Technology (ECIT).
Professor McCanny is an international authority in the design of
silicon integrated circuits for Digital Signal Processing, having made
many pioneering contributions to this field. He has published over 300
major journal and conference papers, holds 25 patents and has
published 5 research books. He is an IEEE Fellow, a Fellow of the Royal
Society, the Royal Academy of Engineering, the Irish Academy of
Engineering, the Institution of Engineering and Technology (formerly
IEE), Engineers Ireland and the Institute of Physics. He is also a
Member of the Royal Irish Academy and the European Academy of
Professor McCanny has won numerous awards. These include a Royal
Academy of Engineering Silver Medal for outstanding contributions to
UK engineering leading to commercial exploitation (1996), an IEEE 3rd
Millennium Medal, the Royal Dublin Society/Irish Times Boyle Medal
(2003) and the Institution of Engineering and Technology's (formerly
IEE) Faraday Medal (2006). In 2002 he was awarded a CBE for his
contributions to "engineering and higher education".
He holds a Bachelor's degree in Physics from the University of
Manchester, a PhD in Solid State Physics from the University of Ulster
and a DSc in Electronics Engineering from Queen's University Belfast.
Will Mobile Broadband lead to an intelligent network or to another stupid one?
Mobile Broadband (MBB) is posing serious challenges to mobile
operators around the world due to its market success. Operators offer
a flat price for their MBB plans, which limits the data Average Revenue
Per User (ARPU) they can obtain. At the same time, a small number of
users and applications dominate the traffic, causing operators to lose
control over how and when their network should be upgraded. Operators
are adding fair-use clauses that allow them to take limited action
against the heaviest users. Operators are also requesting more
intelligence in the network to: 1) understand subscriber behavior, 2)
manage traffic so that they can control their CAPEX spending, and 3)
provide advanced services such as centralized charging, network
security, differentiated QoS or parental control.
This new form of network intelligence requires a lot of stream
processing. Operators are looking for more intelligent routers that do
much more than plain forwarding of packets, and that are flexible
enough to adapt to the ever-changing traffic landscape of the
Internet. The network also needs to know more about the subscriber
originating or receiving the traffic, so as to adapt its behaviour to
the subscriber's profile.
In the past, fixed-broadband ISPs have tried this approach with mixed
results. Will mobile operators fully succeed this time? Even if they
do, will network-neutrality legislation limit the level of
intelligence in the network?
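To make the kind of intelligence being asked of the network concrete, here is a toy flow classifier and policy lookup. The port numbers and policies are invented for illustration; real deep-packet-inspection nodes use far richer signatures than destination ports.

```python
# Hypothetical sketch of the stream-processing step an "intelligent
# router" adds to plain forwarding: classify each packet into a
# service category, then apply a per-category traffic policy.

SERVICE_BY_PORT = {80: "web", 443: "web", 5060: "voip", 6881: "p2p"}

POLICY = {
    "web":  {"max_kbps": 2000},
    "voip": {"max_kbps": 256, "priority": "high"},
    "p2p":  {"max_kbps": 128},   # throttled under a fair-use clause
}

def classify(packet):
    """Map a packet (a dict with a dst_port) to a service category."""
    return SERVICE_BY_PORT.get(packet["dst_port"], "default")

def shape(packet):
    """Return the traffic policy that applies to this packet."""
    return POLICY.get(classify(packet), {"max_kbps": 512})

pkt = {"src": "10.0.0.7", "dst_port": 6881, "bytes": 1400}
print(classify(pkt), shape(pkt)["max_kbps"])   # → p2p 128
```

Per-subscriber behaviour would add a second lookup keyed on the originating subscriber, which is exactly the extra state that makes this harder than stateless forwarding.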
Pablo Molinero, Ph.D., is the Strategic Product Manager for the
Service Aware Support Node (SASN) at Ericsson.
He is responsible for:
(§) securing Ericsson's competitive and world-leading
position in service awareness and traffic inspection in both mobile
and fixed markets;
(§) supporting the market units and channels with strategies,
roadmap information, feedback collection and specific customer cases;
(§) product profitability management, life-cycle management,
product strategies, roadmaps, and management of the R&D budget of the
Service Aware Support Node (SASN) product.
SASN is an access-agnostic node used by mobile and fixed operators.
SASN captures user IP traffic, analyzes its content and classifies it
into separate service categories. After the classification, each
service session is charged, metered, controlled and shaped according
to user-specific policies. The node provides business intelligence and
enables advanced charging and control models for operators. Ericsson
is the technical and market leader for service differentiation in this area.
Pablo holds Ph.D. and M.Sci. degrees in Electrical Engineering from
Stanford University. He also holds engineering degrees in Telecommunications
Engineering from both Universidad Politecnica Madrid and Ecole
Nationale Superieure des Telecommunications, Paris. Finally, he also
holds an M.Sci. in Physics from UNED, Spain.