- Nonlinear circuits and systems
- analysis and design of nonlinear circuits (and particularly of
discrete-time ones) for the synthesis of signals with pre-assigned
statistical features
- Sensors and signal processing for fluid dynamics
applications
- novel methodologies for the estimation of flight attitude angles
and air speed
- indirect approaches for the measurement of air data, exploiting
"on-skin" measurements
- sensor networks
- capacitive sensors
- Pulse codings for actuation, signal synthesis, audio
amplification and power conversion
- computationally efficient codings delivering energy-friendly
power management, low distortion and low electromagnetic emission
levels
- True-Random Number Generators
- design of generators for true-random sequences based on chaotic
dynamics and on the reuse of ADC analog modules
- Oscillation Based Test and Complex Oscillation Based Test
- forcing of oscillatory regimes, and specifically of complex ones,
for the validation of analog and mixed-mode circuits
The research of Sergio Callegari addresses various aspects of
electronics and microelectronics and involves two core lines of
activity: the first related to the analysis and optimization of
non-linear systems, the second to sensors.
1 Analysis, synthesis and optimization of non-linear
systems
Sergio Callegari has been working on non-linear circuits and systems
since 1996. Recently, his activity has focused particularly on
complex dynamic circuits and on the exploitation of non-linear
features for the optimization of engineering systems.
1.1 Pulse modulations and switching systems
This research line aims to develop innovative encoders for
pulse-based signals characterized by binary or otherwise discrete
levels, with particular reference to waveform synthesis, actuation,
and audio amplification.
Traditionally, modulations have mostly been used in engineering
as a smart means to exploit transmission media. However, in more
general terms, they constitute a whole paradigm for the
representation of information where the informative content is
distributed over time (and possibly over space) into signals (or
signal vectors). In particular, pulse modulations (such as width,
position, density and frequency modulation — PWM, PPM, PDM and PFM
respectively) exploit this property to allow a signal characterized
by discrete values (and as such processable by digital means) to
exhibit analog properties, such as the ability to be directly
processed by filters and continuous physical plants.
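As a minimal illustration of this property (assuming a standard first-order sigma-delta loop, which is only one of many possible pulse encoders and not one of those developed in this research), the following Python sketch encodes a slow sine wave into a two-level pulse-density-modulated stream and shows that a simple moving-average filter, standing in for a continuous low-pass plant, recovers the original waveform.

```python
import numpy as np

def pdm_encode(x):
    """First-order sigma-delta modulator: x in [-1, 1] -> two-level (+/-1) pulse stream."""
    y = np.empty_like(x)
    integ = 0.0                               # loop integrator state
    for n, xn in enumerate(x):
        y[n] = 1.0 if integ >= 0.0 else -1.0  # 1-bit quantizer
        integ += xn - y[n]                    # accumulate the quantization error
    return y

fs, f0, n = 10_000, 13.0, 10_000
t = np.arange(n) / fs
x = 0.6 * np.sin(2 * np.pi * f0 * t)          # slow "analog" information content
pulses = pdm_encode(x)                        # discrete-valued, digitally processable

# A plain moving average (standing in for a low-pass physical plant) recovers the signal.
w = 101
recovered = np.convolve(pulses, np.ones(w) / w, mode="same")
err = np.sqrt(np.mean((recovered[w:-w] - x[w:-w]) ** 2))
print(f"RMS reconstruction error after low-pass filtering: {err:.3f}")
```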
This property is used with success in actuation. For instance,
switched regulators and amplifiers take advantage of pulse
modulations to control the operation of a power bridge that in
turn governs, in a discontinuous and virtually lossless way, the
energy delivery to a load, be it an electric car, a loudspeaker or
any other electrical apparatus. However, the possibilities offered
by pulse modulations are not generally exploited in full. In many
cases, the ability to meet power regulation targets is a side
effect of the properties of a standard modulator and not the result
of a deliberate coding strategy.
The objective of this research is to develop new generations of
encoders, similar in usage to traditional ones and mostly
compatible with them (so as to promote rapid acceptance), but based
on radically different operating principles. Rather than using
standard modulations, the encoders will be explicitly designed to
enhance specific performance indexes thanks to a wide application
of optimization techniques and to the identification of key
mathematical properties. In other words, once the information
content to be represented is assigned, together with a list of
constraints and a series of merit factors, the final decision on
how to encode the information is produced through an explicit
attempt to optimize the merit factors themselves.
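A toy sketch of this philosophy follows (purely illustrative: the frame length, the merit factors and their weights are hypothetical, and the exhaustive search is not the optimization machinery actually used in this research): instead of running a fixed, standard modulator, the encoder explicitly searches for the two-level word that best trades a fidelity merit factor against a switching-activity one.

```python
import itertools
import numpy as np

def encode_frame(target, length=12, w_switch=0.02):
    """Pick the +/-1 word whose mean best matches `target` while penalizing level
    transitions (a crude proxy for switching losses and EMI). The exhaustive search is
    only viable for short frames and merely illustrates explicit merit-factor
    optimization, as opposed to running a standard modulator."""
    best_word, best_cost = None, np.inf
    for word in itertools.product((-1.0, 1.0), repeat=length):
        w = np.array(word)
        fidelity = (w.mean() - target) ** 2         # merit factor 1: tracking error
        switchings = np.count_nonzero(np.diff(w))   # merit factor 2: transitions
        cost = fidelity + w_switch * switchings
        if cost < best_cost:
            best_word, best_cost = w, cost
    return best_word, best_cost

word, cost = encode_frame(target=0.25)
print("chosen pulse pattern:", word.astype(int), " cost:", round(cost, 4))
```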
This research line has recently given rise to the “OpIMA”
project (Optimized Impulsive Modulations for Actuation), which is
funded under the Strategic Projects programme at the University of
Bologna and coordinated by the writer. The project obtained
excellent marks when considered for funding. For more information,
see
www.opima-project.it.
1.2 Non-linear circuits characterized by complex
dynamics
Chaotic systems exhibit a borderline dynamic behaviour sharing
purely-deterministic and random-like features. Consequently, they
can either be modeled by classical means (as systems of
differential or difference equations, etc.) or in a statistical
way, like stochastic processes. It should be noted that despite the
theoretical possibility of using classical, deterministic models,
features such as aperiodic behavior and sensitivity to initial
conditions make it practically impossible to predict their future
behavior. In this sense, chaotic systems are best regarded as
random-like sources.
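The following minimal sketch (using the textbook logistic map, not a circuit from this research) illustrates the point: two orbits generated by the same deterministic rule from initial conditions differing by 10^-10 become completely uncorrelated after a few tens of iterations.

```python
def logistic(x, r=4.0):
    """Textbook logistic map: fully deterministic, yet chaotic for r = 4."""
    return r * x * (1.0 - x)

# Two orbits started 1e-10 apart: the tiny difference is amplified exponentially,
# making long-term prediction practically impossible despite the deterministic rule.
x, y = 0.2, 0.2 + 1e-10
for n in range(61):
    if n % 10 == 0:
        print(f"n={n:2d}  x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.1e}")
    x, y = logistic(x), logistic(y)
```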
This apparent paradox has long made chaotic systems a
subject of speculation and interest. However, the study of
engineering applications of chaos is only very recent and similarly
recent is the development of analysis and synthesis methodologies
for electronic circuits capable of exhibiting complex behavior. In
fact, at least initially, the engineering approach to chaos was
very much limited to the identification of strange behaviors and to
the development of means to avoid them. Only in recent times has the
possibility of fruitful applications been recognized, finally
boosting research in the explicit design of exploitable chaotic
models.
Currently, the areas where chaos is applied range from neural
networks to the design of associative memories, from spread
spectrum telecommunication systems to cryptography, from
watermarking to the reduction of electromagnetic interference, from
simulation to the synthesis of keys for secure authentication, and
so on. The research carried out by Sergio Callegari directly
addresses some of these fields, while maintaining a strong focus on
implementation aspects.
The basis of his scientific activity is the use of mathematical
tools derived from statistics and geared towards the simultaneous
observation of a plurality of system trajectories. This approach,
which has only recently received recognition in engineering, enables
a quantitative understanding of some chaotic
systems (particularly of discrete-time ones) as well as a deeper
characterization of their properties in comparison to what can be
achieved by observing individual orbits. Clearly, the interest is
directed both at the improvement of the available tools and at
their practical exploitation. In this regard, a significant choice
is to focus on those problems for which classical solutions already
exist, so that the advantages and disadvantages deriving from the
exploitation of chaos based techniques can be objectively assessed.
Particularly important are those applications where the merit
factors are already traditionally expressed in probabilistic terms
(for example, spread spectrum communications, electromagnetic
interference reduction, and so on).
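A minimal sketch of this ensemble viewpoint follows (using the textbook tent map, not one of the circuits studied in this research): a large set of trajectories started in a narrow interval is iterated a few times, and its empirical density rapidly approaches the uniform invariant density, a statistical statement that no single orbit makes evident.

```python
import numpy as np

def tent(x):
    """Tent map on [0, 1]; its invariant density is the uniform density."""
    return np.where(x < 0.5, 2.0 * x, 2.0 * (1.0 - x))

rng = np.random.default_rng(0)
ensemble = rng.uniform(0.40, 0.41, 200_000)   # many trajectories, initially clustered

for _ in range(20):                           # iterate the whole ensemble in parallel
    ensemble = tent(ensemble)

# The empirical density is now close to the uniform invariant density (all bins near 1),
# a property of the ensemble that cannot be read off any individual deterministic orbit.
hist, _ = np.histogram(ensemble, bins=10, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
```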
Initially, scientific activity was mostly devoted to those
applications where chaotic sources can substitute for traditional
pseudo-random ones. For example, a noise source to be used in a
stochastic neuron model was developed as well as circuits capable
of producing chaotic binary sequences with reduced self-correlation
and good 0-1 balance. Also, circuits for the optimization of spread
spectrum communication systems were proposed.
More recent applications regard the design of chaotic circuits
based on general purpose field programmable devices or on standard
IC building blocks.
The study of the synthesis and usage of spread spectrum signals
required a significant effort on theoretical matters and opened new
research lines, such as the already cited
optimization of pulse modulations, the generation of cryptographic
keys and the testing of analog building blocks by means of chaotic
excitations.
1.3 Non-conventional techniques for the generation of random
sequences and keys for cryptography and authentication
In the field of Information and Communication Technology, it has
recently been observed that chaotic dynamics may enable a
particularly effective design of true-random number generators,
which are a fundamental primitive for cryptographic, authentication
and information security systems.
The synthesis of random binary sequences is inherent in algorithms
such as the DSA, in key generation procedures for public-key and
symmetric-key cryptography, in the generation of RSA moduli, and in
many secure communication schemes. The ability of
cryptographic techniques to resist attacks based on pattern search
critically depends on the quality and the unpredictability of the
random number generators being adopted. As a result, generators for
cryptographic applications need to meet much more stringent
requirements than those for other applications.
It is generally acknowledged that true-random generators can at best
be approximated. An ideal source should be capable of producing
infinitely long sequences composed of bits that are fully
independent of each other, with the property that restarting the source
never allows an already produced sequence to be re-delivered
(non-repeatability property). In practice, electronic random
generators fall into two categories. On the one hand there are the
so-called pseudo-RNGs, on the other the physical-RNGs. Pseudo-RNGs
are in fact deterministic algorithms capable of expanding an initial
seed into a long binary sequence. Physical-RNGs, on the other hand,
are devices that use microscopic phenomena that are observable in
macroscopic terms and that are generally characterized as noise
(e.g., quantum noise, intervals between events in radioactive decay
processes, thermal and shot noise in electronic circuits,
fluctuations in the frequency of oscillators, activity patterns of
human operators, etc.). Clearly, pseudo-RNGs are those
most distant from ideal specifications: being based on finite
memory algorithms they show periodic behaviors and deliver
correlated samples. For the same reason they are fully repeatable.
The consequent possibility (that necessarily exists, at least
potentially) to recover information on the seed from the
observation of output sequences is obviously hardly desirable in
applications related to information security and cryptography.
However, their substantial advantage lies in their algorithmic
nature that makes them easily implementable in digital circuits and
in software. Physical generators, on the other hand, are the best
at approximating ideal random sources. Unfortunately, they
generally require highly specialized circuits and a strong control
over environmental and operational conditions. This makes them
ill-suited for embedding in electronic and information technology
systems. Nevertheless, the growing importance of applications
related to data security has recently pushed major companies to
adopt them as a replacement for pseudo-RNGs in their hardware
platforms.
Obviously, it would be very desirable to introduce generators
capable of combining the benefits of physical sources with the
implementation ease of pseudo-RNGs. This requires inventing design
strategies enabling the reuse of standard circuit blocks that are
already present in a majority of electronic systems. Recently,
research in this direction has produced some relatively successful
designs based on the reuse of peripheral blocks from FPGAs.
However, these systems are generally characterized by very low
data-rates (on the order of tens of kbit/s). This is invariably due
to the need to rely on physical noise phenomena over which the
designer has little or no control.
A recent proposal is to exploit chaotic dynamics and statistical
techniques for introducing a new class of random generators. An
intuitive justification for this approach stems from the
consideration that many of the microscopic phenomena used in physical
generators could actually be modeled in a deterministic form, and
that it is simply their extreme sensitivity to initial conditions
that makes them unpredictable and almost random to the external
observer. From this premise, it is obvious that rather than
exploiting natural models that are difficult to control and handle,
it would be more convenient to use simpler artificial ones.
Today, it is possible to rely on non-linear discrete-time models
showing sensitivity to initial conditions, capable of producing
complex behaviors and at the same time relatively well understood
from a mathematical point of view. A particularly interesting
aspect is that their applicability to random sequence generation
can be proved not just heuristically, but also formally. Thanks to
an intuition from Kalman, it is possible to study and implement
such models by means of Markov chains and methods from symbolic
dynamics. In addition, it has recently been shown that some of
these models can be implemented in hardware through the re-use of
A/D converter stages. This is a crucial factor from an applicative
point of view, because ADCs are among the analog blocks that have
seen the most consistent refinement and investment in recent years.
They are currently available as reusable cores and codified
as IP blocks. The possibility to derive high performance random
generators from such blocks represents a guarantee of success in
terms of cost reduction and embeddability.
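The following behavioural sketch conveys the idea (it is a simplified model written for illustration; the stage model, the noise level and the bit-extraction scheme are assumptions of this sketch, not the published designs): a 1-bit pipeline-ADC-like stage iterated on its own residue implements a Bernoulli-shift map, and the comparator decisions form a balanced, weakly correlated random bit stream.

```python
import numpy as np

def adc_stage(x, rng, noise_rms=1e-4):
    """Behavioural model of a 1-bit pipeline-ADC-like stage re-used as a chaotic map:
    compare with mid-scale, subtract the decided reference, amplify the residue by 2
    (a Bernoulli shift). The small noise term stands in for the analog non-idealities
    that make restarts non-repeatable in a real device (an assumption of this sketch)."""
    bit = 1 if x >= 0.5 else 0
    residue = 2.0 * (x - 0.5 * bit) + rng.normal(0.0, noise_rms)
    return bit, min(max(residue, 0.0), 1.0 - 1e-12)

rng = np.random.default_rng()
x = rng.uniform()                 # in a real device, the state is seeded by analog noise
bits = np.empty(100_000, dtype=int)
for i in range(bits.size):
    bits[i], x = adc_stage(x, rng)

print("ones fraction :", bits.mean())                              # close to 0.5
print("lag-1 corr.   :", np.corrcoef(bits[:-1], bits[1:])[0, 1])   # close to 0
```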
1.4 COBT: Complex Oscillation Based Test
OBT (Oscillation Based Test) is an emerging technique for the
validation of analog and mixed-mode circuits. The idea is to drive
the block under test into a self-sustained oscillation mode capable
of making faults in the block quite evident. The approach is very
attractive thanks to its extreme simplicity: first of all, it
provides an excitation to the circuit under test without the burden
of external signal sources; and secondly, it allows failures to be
detected from measurements taken on an extremely limited number of
nodes. Both represent important advantages, particularly if
one considers the increasing difficulty in accessing internal nodes
in complex mixed-mode architectures. Furthermore, many functional
blocks can be made suitable for OBT with only a very limited number
of changes in their design.
Notwithstanding the above advantages, the OBT approach faces much
criticism. The most frequent objection is that the adopted
oscillation regime is almost invariably sinusoidal, so that the
circuit under test gets excited by a simple tone. It has repeatedly
been highlighted that such an excitation is incapable of revealing
all possible faults. In the literature there are techniques to overcome
this problem by the sequential use of different oscillation
frequencies, but this hinders other benefits of the approach.
In this context, the research activity of Sergio Callegari led
to the introduction of complex OBT techniques, where the block
under test is forced into a chaotic oscillation regime where the
excitation is rich and able to reveal a wider range of faults. The
first presentation of the COBT concept took place in May 2008 at the
ISCAS international conference with regard to the
validation of A/D converters and received a considerable amount of
interest.
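The following toy sketch conveys the COBT idea (the block model, the feedback nonlinearity and the histogram signature are illustrative assumptions, not the scheme presented at ISCAS): a first-order discrete-time block is closed in a loop that forces a chaotic oscillation, and a coarse statistical signature of the oscillation separates the nominal block from a parametrically faulty one.

```python
import numpy as np

def tent(y):
    return 2.0 * y if y < 0.5 else 2.0 * (1.0 - y)

def cobt_signature(a, b, n=50_000, y0=0.37, bins=8):
    """Close a tent-map feedback loop around the block under test,
    y[n] = a*y[n-1] + b*tent(y[n-1]), forcing a broadband (chaotic) oscillation for the
    nominal parameters, and summarize the oscillation with a coarse histogram signature."""
    y = y0
    hist = np.zeros(bins)
    for _ in range(n):
        y = a * y + b * tent(y)
        hist[min(int(y * bins), bins - 1)] += 1
    return hist / n

nominal = cobt_signature(a=0.30, b=0.70)            # block with nominal parameters
replica = cobt_signature(a=0.30, b=0.70, y0=0.81)   # different start, similar signature
faulty  = cobt_signature(a=0.42, b=0.58)            # parametric fault: oscillation degenerates

print("nominal vs replica distance:", round(np.abs(nominal - replica).sum(), 3))
print("nominal vs faulty  distance:", round(np.abs(nominal - faulty).sum(), 3))
```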
2 Sensors
Since 2002, Sergio Callegari has been working with the
Distributed Sensor Laboratory (LYRAS) of the Advanced Research
Center on Electronic Systems for Information and Communication
Technologies “Ercole de Castro” (ARCES). LYRAS is established at
the Second School of Engineering of the University of Bologna and
its activities are focused on sensors for fluid dynamics and
aerospace applications and on sensors for biological
applications.
2.1 Sensors for applications in fluid-dynamics
Knowledge of normal and tangential strains over a structure
immersed in a fluid is of primary importance in applications
related to mechanics, aeronautics and fluid dynamics. This
information can be gathered by taking local measurements at many
points, ideally creating a mesh over the surface under test. To this
aim it is necessary to deploy a network of coordinated sensors
whose raw readings are jointly processed to infer higher-level
quantities such as strain gradients, lift, friction, detachment
points of the boundary layer, and so on. The approach requires
sensors that are small (to make local measurements), cheap (to
provide large replication factors), smart (to coordinate them) and
robust (to deploy them in harsh environments).
Given the above premises, the interest in defining new types of
sensors sharing properties from conventional macro-sensors and MEMS
micro-sensors is self evident. Similarly evident is the interest in
shifting the research focus from the single transducer to sensor
systems capable of managing a large number of probes.
2.2 Signal processing for data from fluid-dynamics sensor
systems
All aircraft have air-data systems used to deliver input data to
the automatic flight control unit or to alert the pilot.
Particularly important pieces of information include the air speed
and the attitude angles, primarily the angles of attack and
side-slip.
Traditionally, flight parameters are read by pieces of ad-hoc
instrumentation that include parts protruding outside the aircraft
silhouette such as pipes, pitot tubes, wingbooms, nosebooms, vanes,
flow deflectors, and so on. Sensor systems are therefore
characterized by a high degree of intrusiveness. Moreover, pneumatic
links are often needed between the different elements of the
instrumentation or between the interior and the exterior of the
aircraft. This is particularly undesirable in the case of small
unmanned air vehicles (UAVs). Another feature of conventional
sensors is that they aim at direct measurements, with a one-to-one
device-to-measurement relationship. Such an approach is clearly
motivated by the cost of the instruments and their installation,
but is not without drawbacks.
These premises justify an interest in radically different
measurement approaches, strongly based on indirect measures and
exploiting a large number of redundant sensors. The idea is to
derive all the flight parameters from a set of homogeneous readings
variously related to them, estimating and decoupling them by signal
processing techniques.
Particularly, it would be desirable to deploy arrays of pressure
or flow sensors directly placed on the very aerodynamic surfaces of
the aircraft (for example on the wings) and to simultaneously and
indirectly infer from them both the average air speed and the
angle of attack. The possibility of using pressure measurements to
infer flight parameters by means of estimation techniques was
explored in the past (e.g., as part of the work by Whitmore at
the NASA labs). However, the experiments reported in the literature
typically use conventional transducers. For example, the NASA
experiments relied on orifices opened on the fuselage of the
airplane and pneumatically linked to standard pressure sensors.
This does not allow high levels of redundancy to be achieved. On
the contrary, the distributed usage of low cost surface sensors
would allow a much larger amount of information to be collected.
With this, it may become possible to relax the accuracy
requirements placed on the individual sensors and at the same time
to improve the ability to decouple physical effects and the
resilience to faults.
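A minimal numerical sketch of this idea follows (the linear pressure model, its coefficients and the noise level are hypothetical, chosen only to illustrate the estimation principle): redundant, noisy surface-pressure readings are fitted by least squares to jointly recover the dynamic pressure, and hence the air speed, together with the angle of attack, with the estimate typically improving as more sensors are added.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_array(n_sensors):
    """Hypothetical calibration coefficients: sensor i is assumed to read
    p_i ~ q * (a_i + b_i * alpha), with q the dynamic pressure and alpha the angle of
    attack; in practice a_i, b_i would come from calibration or fluid-dynamic models."""
    a = rng.uniform(0.2, 1.0, n_sensors)
    b = rng.uniform(-3.0, 3.0, n_sensors)
    return a, b

def estimate(a, b, readings):
    """Least-squares estimate of (q, alpha). The model is linear in (q, q*alpha), so the
    redundant readings can be fitted by ordinary least squares and then decoupled."""
    A = np.column_stack([a, b])
    (q_hat, qa_hat), *_ = np.linalg.lstsq(A, readings, rcond=None)
    return q_hat, qa_hat / q_hat

rho, speed, alpha = 1.225, 30.0, np.deg2rad(4.0)   # air density (SI), 30 m/s, 4 deg
q_true = 0.5 * rho * speed**2                      # dynamic pressure, about 551 Pa

for n_sensors in (4, 16, 64):
    a, b = make_array(n_sensors)
    readings = q_true * (a + b * alpha) + rng.normal(0.0, 20.0, n_sensors)  # coarse sensors
    q_hat, alpha_hat = estimate(a, b, readings)
    print(f"{n_sensors:3d} sensors: speed = {np.sqrt(2 * q_hat / rho):5.2f} m/s, "
          f"alpha = {np.degrees(alpha_hat):4.2f} deg")
```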
Presently, research has already produced some early prototypes
of low-cost sensors manufactured with Printed-Circuit-Board (PCB)
technologies and suitable for the above-mentioned distributed
applications. Moreover, the feasibility of the measurement approach
has been verified by fluid-dynamic simulation. It has
been shown that by increasing the level of redundancy there is a
significant relaxation in the accuracy requirements placed on the
individual sensing units at no loss in the overall accuracy.
Research is currently focused on algorithmic aspects, model
identification and management of computational load. There is also
a plan for experimentally validating the approach in a wind tunnel
and for checking the feasibility of techniques for fault detection
and fault-tolerant operation.