C++ DATA STRUCTURE
Saturday, 28 June 2014
WHAT IS OBJECT ORIENTED PROGRAMMING (OOP)? ITS HISTORY, AND HOW C++ WORKS
Introduction to Object-Oriented Programming
• Objects and classes
• Abstract Data Types (ADT)
• Encapsulation and information hiding
• Aggregation
• Inheritance and polymorphism
Sony Computer Entertainment Europe Research & Development Division Pitfalls of Object Oriented Programming
Slide 2
•A quick look at Object Oriented (OO) programming
•A common example
•Optimisation of that example
•Summary
What I will be covering
Slide 3
•What is OO programming?
–a programming paradigm that uses "objects" – data structures consisting of data fields and methods together with their interactions – to design applications and computer programs. (Wikipedia)
•Includes features such as
–Data abstraction
–Encapsulation
–Polymorphism
–Inheritance
Object Oriented (OO) Programming
Slide 4
•OO programming allows you to think about problems in terms of objects and their interactions.
•Each object is (ideally) self contained
–Contains its own code and data.
–Defines an interface to its code and data.
•Each object can be perceived as a 'black box'.
What’s OOP for?
Slide 5
•If objects are self contained then they can be
–Reused.
–Maintained without side effects.
–Used without understanding internal implementation/representation.
•This is good, yes?
Objects
Slide 6
•Well, yes
•And no.
•First some history.
Are Objects Good?
Slide 7
A Brief History of C++
(Slides 7-14 show a timeline from 1979 to 2009:)
•1979: C++ development started
•1983: Named "C++"
•1985: First commercial release
•1989: Release of v2.0, which added
–multiple inheritance,
–abstract classes,
–static member functions,
–const member functions,
–protected members.
•1998: Standardised
•2003: Updated
•C++0x: next standard, release date still "?"
Slide 15
•Many more features have been added to C++
•CPUs have become much faster.
•Transition to multiple cores
•Memory has become faster.
So what has changed since 1979?
http://www.vintagecomputing.com
Slide 16
CPU performance
Computer architecture: a quantitative approach
By John L. Hennessy, David A. Patterson, Andrea C. Arpaci-Dusseau
Slide 17
CPU/Memory performance
Computer architecture: a quantitative approach
By John L. Hennessy, David A. Patterson, Andrea C. Arpaci-Dusseau
Slide 18
•One of the biggest changes is that memory access speeds are far slower (relatively)
–1980: RAM latency ~ 1 cycle
–2009: RAM latency ~ 400+ cycles
•What can you do in 400 cycles?
What has changed since 1979?
Slide 19
•OO classes encapsulate code and data.
•So, an instantiated object will generally contain all data associated with it.
What has this to do with OO?
Slide 20
•With modern HW (particularly consoles), excessive encapsulation is BAD.
•Data flow should be fundamental to your design (Data Oriented Design)
My Claim
Slide 21
•Base Object class
–Contains general data
•Node
–Container class
•Modifier
–Updates transforms
•Drawable/Cube
–Renders objects
Consider a simple OO Scene Tree
Slide 22
•Each object
–Maintains bounding sphere for culling
–Has transform (local and world)
–Dirty flag (optimisation)
–Pointer to Parent
Object
Slide 23
Objects
Class Definition
Memory Layout
Each square is 4 bytes
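The class definition and memory layout are shown as an image on the slide and are not reproduced here. A minimal sketch matching the bullets above could look like the following (Matrix4 and BoundingSphere stand in for the engine's math types; all member names are assumptions):

class Object
{
protected:
    Matrix4        m_Transform;            // local transform
    Matrix4        m_WorldTransform;       // world transform
    BoundingSphere m_BoundingSphere;       // local-space bounds
    BoundingSphere m_WorldBoundingSphere;  // world-space bounds, used for culling
    Object*        m_pParent;              // pointer to parent
    bool           m_Dirty;                // dirty flag (optimisation)

public:
    virtual void Update(const Matrix4& parentTransform);
    virtual BoundingSphere GetWorldBoundingSphere(const Matrix4& parentTransform);
    virtual void Render();
};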
Slide 24
•Each Node is an object, plus
–Has a container of other objects
–Has a visibility flag.
Nodes
Slide 25
Nodes
Class Definition
Memory Layout
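Again, the class definition on the slide is an image; a sketch of a Node consistent with those bullets, building on the hypothetical Object above, might be:

class Node : public Object
{
protected:
    std::vector<Object*> m_Objects;  // container of child objects/nodes
    bool                 m_Visible;  // visibility flag

public:
    virtual void Update(const Matrix4& parentTransform);
    virtual void Render();
};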
Slide 26
Consider the following code…
•Update the world transform and world space bounding sphere for each object.
Slide 27
Consider the following code…
•Leaf nodes (objects) return transformed bounding spheres
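The code itself appears only as a screenshot in the slides. A plausible reconstruction, based on the commentary on the next few slides (the m_Dirty branch, the ~12-cycle bounding-sphere calculation, the virtual call per child) and on the hypothetical classes sketched earlier (Reset, Expand and Transform are assumed BoundingSphere methods), might be:

void Node::Update(const Matrix4& parentTransform)
{
    m_WorldTransform = m_Transform * parentTransform;
    m_WorldBoundingSphere.Reset();

    for (size_t i = 0; i < m_Objects.size(); ++i)
    {
        // virtual call: each leaf object returns its transformed bounding sphere
        m_WorldBoundingSphere.Expand(
            m_Objects[i]->GetWorldBoundingSphere(m_WorldTransform));
    }
}

BoundingSphere Object::GetWorldBoundingSphere(const Matrix4& parentTransform)
{
    m_WorldTransform = m_Transform * parentTransform;

    if (m_Dirty)  // when m_Dirty is false this branch is frequently mispredicted
    {
        // the calculation itself only takes ~12 cycles
        m_WorldBoundingSphere = m_BoundingSphere.Transform(m_WorldTransform);
        m_Dirty = false;
    }
    return m_WorldBoundingSphere;
}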
Slide 28
Consider the following code…
•Leaf nodes (objects) return transformed bounding spheres
What's wrong with this code?
Slide 29
Consider the following code…
•Leaf nodes (objects) return transformed bounding spheres
If m_Dirty is false then we get a branch misprediction, which costs 23 or 24 cycles.
Slide 30Consider the following code…
•Leaf nodes (objects) return transformed bounding spheres
Calculation of the world bounding sphere takes only 12 cycles.
Slide 31
Consider the following code…
•Leaf nodes (objects) return transformed bounding spheres
So using a dirty flag here is actually slower than not using one (in the case where it is false).
Slide 32
Let's illustrate cache usage
Main Memory
L2 Cache
Each cache line is 128 bytes
Slide 33
Cache usage
Main Memory
parentTransform is already in the cache (somewhere)
L2 Cache
Slide 34
Cache usage
Main Memory
Assume this is a 128-byte boundary (start of cache line)
L2 Cache
Slide 35
Cache usage
Main Memory
Load m_Transform into cache
L2 Cache
Slide 36
Cache usage
Main Memory
m_WorldTransform is stored via cache (write-back)
L2 Cache
Slide 37
Cache usage
Main Memory
Next it loads m_Objects
L2 Cache
Slide 38
Cache usage
Main Memory
Then a pointer is pulled from somewhere else (Memory managed by std::vector)
L2 Cache
Slide 39
Cache usage
Main Memory
vtbl ptr loaded into cache
L2 Cache
Slide 40
Cache usage
Main Memory
Look up virtual function
L2 Cache
Slide 41
Cache usage
Main Memory
Then branch to that code
(load in instructions)
L2 Cache
Slide 42
Cache usage
Main Memory
New code checks dirty flag then sets world bounding sphere
L2 Cache
Slide 43
Cache usage
Main Memory
Node's World Bounding Sphere is then expanded
L2 Cache
Slide 44
Cache usage
Main Memory
Then the next Object is processed
L2 Cache
Slide 45
Cache usage
Main Memory
First object costs at least 7 cache misses
L2 Cache
Slide 46
Cache usage
Main Memory
Subsequent objects cost at least 2 cache misses each
L2 Cache
Slide 47
•11,111 nodes/objects in a tree 5 levels deep
•Every node being transformed
•Hierarchical culling of tree
•Render method is empty
The Test
Slide 48
Performance
This is the time taken just to traverse the tree!
Slide 49
Why is it so slow?
~22ms
Slide 50
Look at GetWorldBoundingSphere()
Slide 51
Samples can be a little misleading at the source code level
Slide 52
if(!m_Dirty) comparison
Slide 53
Stalls due to the load 2 instructions earlier
Slide 54
Similarly with the matrix multiply
Slide 55
Some rough calculations
Branch Mispredictions: 50,421 @ 23 cycles each ~= 0.36ms
Slide 56
Some rough calculations
Branch Mispredictions: 50,421 @ 23 cycles each ~= 0.36ms
L2 Cache misses: 36,345 @ 400 cycles each ~= 4.54ms
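For reference, these figures imply a clock speed of roughly 3.2 GHz, which is not stated on the slide: 50,421 × 23 ≈ 1.16 million cycles ≈ 0.36 ms, and 36,345 × 400 ≈ 14.5 million cycles ≈ 4.5 ms.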
Slide 57
•From Tuner, ~ 3 L2 cache misses per object
–These cache misses are mostly sequential (more than 1 fetch from main memory can happen at once)
–Code/cache miss/code/cache miss/code…
Slide 58
•How can we fix it?
•And still keep the same functionality and interface?
Slow memory is the problem here
Slide 59
•Use homogenous, sequential sets of data
The first step
Slide 60
Homogeneous Sequential Data
Slide 61
•Use custom allocators
–Minimal impact on existing code
•Allocate contiguous
–Nodes
–Matrices
–Bounding spheres
Generating Contiguous Data
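The slides do not show the allocator itself. As a rough sketch of the idea, a fixed-capacity pool per data type keeps each kind of data contiguous and in allocation order (LinearPool and its interface are invented for illustration):

#include <cassert>
#include <cstddef>
#include <vector>

// Minimal fixed-capacity pool: everything allocated from it sits contiguously,
// in allocation order.
template <typename T>
class LinearPool
{
public:
    explicit LinearPool(std::size_t capacity) { m_storage.reserve(capacity); }

    T* Allocate()
    {
        // capacity is fixed up front, so the vector never reallocates and
        // previously handed-out pointers stay valid
        assert(m_storage.size() < m_storage.capacity());
        m_storage.push_back(T());
        return &m_storage.back();
    }

private:
    std::vector<T> m_storage;  // contiguous backing store
};

// Usage sketch: give each type its own pool so that nodes, matrices and
// bounding spheres each end up in their own contiguous block of memory.
//   LinearPool<Node>           nodePool(11111);
//   LinearPool<Matrix>         matrixPool(11111);
//   LinearPool<BoundingSphere> spherePool(11111);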
Slide 62
Performance
19.6ms -> 12.9ms
35% faster just by moving things around in memory!
Slide 63
•Process data in order
•Use implicit structure for hierarchy
–Minimise to and fro from nodes.
•Group logic to optimally use what is already in cache.
•Remove regularly called virtuals.
What next?
Slide 64
Hierarchy
(Slides 64-72 build up a diagram of the node hierarchy:)
•We start with a parent Node, which has children nodes, and they in turn have a parent.
•Those children have children of their own, and they all have parents.
•A lot of this information can be inferred.
•Use a set of arrays, one per hierarchy level.
•The parent has 2 children (:2); the children have 4 children each (:4, :4).
•Ensure nodes and their data are contiguous in memory.
Slide 73
•Make the processing global rather than local
–Pull the updates out of the objects.
•No more virtuals
–Easier to understand too –all code in one place.
Slide 74
•OO version
–Update transform top down and expand WBS bottom up
Need to change some things…
Slide 75
(Slides 75-82 step the OO update through the diagram, with the following caption sequence:)
•Update transform
•Update transform
•Update transform and world bounding sphere
•Add bounding sphere of child
•Update transform and world bounding sphere
•Add bounding sphere of child
•Update transform and world bounding sphere
•Add bounding sphere of child
Slide 83
•Hierarchical bounding spheres pass info up
•Transforms cascade down
•Data use and code is 'striped'.
–Processing is alternating
Slide 84
•To do this with a 'flat' hierarchy, break it into 2 passes
–Update the transforms and bounding spheres (from top down)
–Expand bounding spheres (bottom up)
Conversion to linear
Slide 85
Transform and BS updates
(diagram: the flat per-level arrays, :2 / :4 / :4)
For each node at each level (top down)
{
    multiply world transform by parent's
    transform wbs by world transform
}
Slide 86
Update bounding sphere hierarchies
(diagram: the flat per-level arrays, :2 / :4 / :4)
For each node at each level (bottom up)
{
    add wbs to parent's
    cull wbs against frustum
}
Slide 87
Update Transform and Bounding Sphere
How many children nodes to process
Slide 88
Update Transform and Bounding Sphere
For each child, update transform and bounding sphere
Slide 89
Update Transform and Bounding Sphere
Note the contiguous arrays
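The update code on these slides is shown as screenshots. A sketch of what the two per-level passes over flat, contiguous arrays could look like is below (the struct layout and the math helpers are assumptions, not the presentation's actual code):

#include <cstddef>

struct Matrix { float m[16]; };
struct Sphere { float x, y, z, radius; };

// One hierarchy level: node data lives in parallel, contiguous arrays, and
// each node only records how many children it has on the next level.
struct NodeLevel
{
    std::size_t     count;            // number of nodes on this level
    const unsigned* childCounts;      // children per node, e.g. {2} then {4, 4}
    const Matrix*   localTransforms;
    Matrix*         worldTransforms;
    const Sphere*   localSpheres;
    Sphere*         worldSpheres;
};

// Assumed math helpers, not defined here.
Matrix MultiplyMatrices(const Matrix& a, const Matrix& b);
Sphere TransformSphere(const Sphere& s, const Matrix& m);
void   ExpandSphere(Sphere& parent, const Sphere& child);

// Top-down pass: "multiply world transform by parent's, transform wbs by world transform".
void UpdateLevel(const NodeLevel& parents, NodeLevel& children)
{
    std::size_t c = 0;
    for (std::size_t p = 0; p < parents.count; ++p)
        for (unsigned i = 0; i < parents.childCounts[p]; ++i, ++c)
        {
            children.worldTransforms[c] =
                MultiplyMatrices(children.localTransforms[c], parents.worldTransforms[p]);
            children.worldSpheres[c] =
                TransformSphere(children.localSpheres[c], children.worldTransforms[c]);
        }
}

// Bottom-up pass: "add wbs to parent's".
void ExpandLevel(NodeLevel& parents, const NodeLevel& children)
{
    std::size_t c = 0;
    for (std::size_t p = 0; p < parents.count; ++p)
        for (unsigned i = 0; i < parents.childCounts[p]; ++i, ++c)
            ExpandSphere(parents.worldSpheres[p], children.worldSpheres[c]);
}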
Slide 90
So, what’s happening in the cache?
Parent
Children
Parent Data
Children's Data
Children's node data not needed
Unified L2 Cache
Slide 91
Load parent and its transform
Parent
Parent Data
Children's Data
Unified L2 Cache
Slide 92
Load child transform and set world transform
Parent
Parent Data
Children's Data
Unified L2 Cache
Slide 93
Load child BS and set WBS
Parent
Parent Data
Children's Data
Unified L2 Cache
Slide 94
Load child BS and set WBS
Parent
Parent Data
Children's Data
Next child is calculated with no extra cache misses !
Unified L2 Cache
Slide 95
Load child BS and set WBS
Parent
Parent Data
Children's Data
The next 2 children incur 2 cache misses in total
Unified L2 Cache
Slide 96
Prefetching
Parent
Parent Data
Children's Data
Because all data is linear, we can predict what memory will be needed in ~400 cycles and prefetch
Unified L2 Cache
Slide 97
•Tuner scans show about 1.7 cache misses per node.
•But, these misses are much more frequent
–Code/cache miss/cache miss/code
–Less stalling
Slide 98
Performance
19.6 -> 12.9 -> 4.8ms
Slide 99
•Data accesses are now predictable
•Can use prefetch(dcbt) to warm the cache
–Data streams can be tricky
–Many reasons for stream termination
–Easier to just use dcbt blindly
•(look ahead x number of iterations)
Prefetching
Slide 100
•Prefetch a predetermined number of iterations ahead
•Ignore incorrect prefetches
Prefetching example
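The prefetching example is also a screenshot. A rough sketch of the "look ahead a fixed number of iterations" idea, using GCC/Clang's __builtin_prefetch as a portable stand-in for the PowerPC dcbt instruction mentioned above (the constant and the helper are assumptions):

#include <cstddef>

struct Matrix { float m[16]; };

Matrix MultiplyMatrices(const Matrix& a, const Matrix& b);  // assumed math helper

const std::size_t kPrefetchAhead = 16;  // tuned so data arrives roughly 400 cycles later

void UpdateTransforms(const Matrix* local, Matrix* world,
                      const Matrix& parent, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        // Warm the cache for data we will touch in a few hundred cycles;
        // prefetches that run past the end of the array are wasted, not harmful.
        __builtin_prefetch(&local[i + kPrefetchAhead]);
        __builtin_prefetch(&world[i + kPrefetchAhead]);

        world[i] = MultiplyMatrices(local[i], parent);
    }
}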
Slide 101
Performance
19.6 -> 12.9 -> 4.8 -> 3.3ms
Slide 102
•This example makes very heavy use of the cache
•This can affect other threads' use of the cache
–Multiple threads with heavy cache use may thrash the cache
A Warning on Prefetching
Slide 103
The old scan
~22ms
Slide 104
The new scan
~16.6ms
Slide 105
Up close
~16.6ms
Slide 106
Looking at the code (samples)
Slide 107
Performance counters
Branch mispredictions: 2,867 (cf. 47,000)
L2 cache misses: 16,064 (cf. 36,000)
Slide 108
•Just reorganising data locations was a win
•Data + code reorganisation = dramatic improvement.
•+ prefetching equals even more WIN.
In Summary
Slide 109
•Be careful not to design yourself into a corner
•Consider data in your design
–Can you decouple data from objects?
–…code from objects?
•Be aware of what the compiler and HW are doing
OO is not necessarily EVIL
Slide 110
•Optimise for data first, then code.
–Memory access is probably going to be your biggest bottleneck
•Simplify systems
–KISS
–Easier to optimise, easier to parallelise
It's all about the memory
Slide 111
•Keep code and data homogenous
–Avoid introducing variations
–Don't test for exceptions – sort by them.
•Not everything needs to be an object
–If you must have a pattern, then consider using Managers
Homogeneity
Slide 112
•You are writing a GAME
–You have control over the input data
–Don't be afraid to preformat it – drastically if need be.
•Design for specifics, not generics (generally).
Remember
Slide 113
•Better performance
•Better realisation of code optimisations
•Often simpler code
•More parallelisable code
Data Oriented Design Delivers
Slide 114
•Questions?
The END
Pure Object-Oriented Languages
Five rules [Source: Rizwan blog]:
• Everything is an object.
• A program is a set of objects telling each other what to do by
sending messages.
• Each object has its own memory (made up by other objects).
• Every object has a type.
• All objects of a specific type can receive the same messages.
Java breaks some of these rules in the name of efficiency
Abstract Data Type
(ADT)
• An ADT is a collection of objects (or values) and a
corresponding set of methods.
• An ADT encapsulates the data representation and makes data
access possible at a higher level of abstraction.
• Example 1: A set of vehicles with operations for starting,
stopping, driving, get km/liter, etc..
• Example 2: A time interval, start time, end time, duration,
overlapping intervals, etc.
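As a concrete illustration of Example 2 (not taken from the slides), a small time-interval ADT in C++ might look like this; clients see only the interface, while the internal representation (minutes since midnight) stays hidden and could be changed without affecting them:

#include <iostream>

class TimeInterval
{
public:
    TimeInterval(int startMin, int endMin) : m_start(startMin), m_end(endMin) {}

    int  start()    const { return m_start; }
    int  end()      const { return m_end; }
    int  duration() const { return m_end - m_start; }

    bool overlaps(const TimeInterval& other) const
    {
        return m_start < other.m_end && other.m_start < m_end;
    }

private:
    int m_start;  // minutes since midnight
    int m_end;
};

int main()
{
    TimeInterval meeting(9 * 60, 10 * 60);        // 09:00-10:00
    TimeInterval lunch(9 * 60 + 30, 11 * 60);     // 09:30-11:00
    std::cout << "duration: " << meeting.duration() << " minutes, overlaps: "
              << meeting.overlaps(lunch) << "\n"; // prints 60 and 1
    return 0;
}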
Encapsulation and Information Hiding
Data can be encapsulated such that it is invisible to the "outside
world".
• Data can only be accessed via methods.
(Figure: in a procedural design, free functions operate on separate data; in an ADT, the data is bundled with its own methods.)
Encapsulation and Information Hiding, cont.
• What the "outside world" cannot see it cannot depend on!
• The object is a "fire-wall" between the object and the "outside
world".
• The hidden data and methods can be changed without affecting
the "outside world".
(Figure: an object's hidden data and methods sit behind a client interface of visible data and methods.)
Class vs. Object
Class
• A description of the common properties of a set of objects.
• A concept.
• A class is a part of a program.
• Example 1: Person
• Example 2: Album
Object
• A representation of the properties of a single instance.
• A phenomenon.
• An object is part of data and a program execution.
• Example 1: Bill Clinton, Bono, Viggo Jensen.
• Example 2: A Hard Day's Night, Joshua Tree, Rickie
Type and Interface
• An object has type and an interface.
(Figure: an object of type Account, whose interface consists of balance(), withdraw() and deposit().)
• To get an object Account a = new Account()
• To send a message a.withdraw()
Instantiating Classes
• An instantiation is a mechanism where objects are created from
a class.
• Always involves storage allocation for the object.
• A mechanism where objects are given an initial state.
Static Instantiating
• In the declaration part of a program.
• A static instance is implicitly created.
Dynamic Instantiating
• In the method part of a program.
• A dynamic instance is created explicitly with a special command.
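The slides use Java-style notation; in C++, the Account type from the earlier slide and the two instantiation styles could be sketched as follows (the class body is an assumption, only the interface names come from the slide):

class Account
{
public:
    Account() : m_balance(0.0) {}

    double balance() const         { return m_balance; }
    void   withdraw(double amount) { m_balance -= amount; }
    void   deposit(double amount)  { m_balance += amount; }

private:
    double m_balance;
};

int main()
{
    Account a;                   // static instantiation: implicitly created by the declaration
    a.deposit(100.0);

    Account* p = new Account();  // dynamic instantiation: created explicitly with new
    p->deposit(50.0);
    p->withdraw(20.0);
    delete p;                    // dynamically created objects must be released
    return 0;
}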
Interaction between Objects
• Interaction between objects happens by messages being sent.
• A message activates a method on the calling object.
• An object O1 interacts with another object O2 by calling a
method on O2 (must be part of the client interface).
“O1 sends O2 a message”
• O1 and O2 must be related to communicate.
• The call of a method corresponds to a procedure call in a non-object-oriented language such as C or Pascal.
Phenomenon and Concept
• A phenomenon is a thing in the “real” world that has individual
existence.
• A concept is a generalization, derived from a set of phenomena
and based on the common properties of these phenomena.
• Characteristics of a concept
A name
Intension, the set of properties of the phenomenon
Extension, the set of phenomena covered by the concept.
Classification and Exemplification
• A classification is a description of which phenomena belong to a concept.
• An exemplification is a phenomenon that is covered by the concept.
Generalization and Specialization
• Generalization creates a concept with a broader scope.
• Specialization creates a concept with a narrower scope.
• Reusing the interface!
(Figure: specialization goes from Concept A to Concept B; generalization goes from Concept C to Concept D. Example: Vehicle generalizes Car and Truck; Car specializes into Hatchback, Station car and Sedan; Truck into Pickup.)
Generalization and Specialization, Example
• Inheritance: get the interface from the general class.
• Objects related by inheritance are all of the same type.
(Figure: base class Shape declares draw() and resize(); Circle, Line, Rectangle and Square all inherit and provide draw() and resize().)
Code Example
• Polymorphism: One piece of code works with all shape objects.
• Dynamic binding: How polymorphism is implemented.
void doSomething(Shape s){
    s.draw();   // "magically" calls on specific class
    s.resize();
}

Circle c = new Circle();
Line l = new Line();
Rectangle r = new Rectangle();
doSomething(c); // dynamic binding
doSomething(l);
doSomething(r);
Structuring by Program or Data?
• What are the actions of the program vs. which data does the
program act on.
• Top-down: Stepwise program refinement
• Bottom-up: Focus on the stable data parts then add methods
• Object-oriented programming is bottom-up. Programs are structured with their outset in the data.
• C and Pascal programs are typically implemented in a more top-down fashion.
Java Program Structure
// comment on the class
public class MyProg {
    String s = "Viggo";                       // variable
    /**
     * The main method (comment on method)
     */
    public static void main (String[] args){ // method header
        // just write some stuff
        System.out.println("Hello World");   // method body
    }
}
Java Class Example Car
/** A simple class modeling a car. */
public class Car {
// instance variables
private String make; private String model;
private double price;
// String representation of the car
public Car(String m, String mo, double p) {
make = m; model = mo; price = p;
}
// String representation of the car
public String toString() {
return "make: " + make + " model: "
+ model + " price: " + price;
}
}Byte Code vs. Executable
(Figure: MyProg.java is compiled with "javac MyProg.java" into the Java class file MyProg.class, portable byte code, which runs on the Java Virtual Machine on top of the operating system. MyProg.cpp is compiled with "gcc MyProg.cpp -o myprog.exe" into the executable myprog.exe, which runs directly on the operating system.)
History of Java
• 1990 Oak (interactive television, big failure)
• 1994 Java (for the Internet)
Main feature: "Write Once, Run Anywhere"
=> wrap the operating system so they all look the same
Designed for
• A fresh start (no backward compatibility)
• "Pure" OOP: C++ Syntax, Smalltalk style
• Improvements over C++: much harder to write a bad program
• Internet programming
Very hard to create a virus
Run in a web browser (and at the server)
• There is a speed issue (from Java 1.3 and up it is much better)
Difference from C/C++
• Everything resides in a class
variables and methods
• Garbage collection
• Error and exception handling
• No global variables or methods
• No local static variables
• No separation of declaration and implementation (no header
files).
• No explicit pointer operations (uses references)
• No preprocessor (but something similar)
• Has fewer "dark corners"
• Has a much larger standard library
Summary
• Classes are "recipes" for creating objects
• All objects are instances of classes
• An ADT is implemented in a class
• Aggregation and decomposition
“has-a” relationship
• Generalization and specialization
“is-a” or “is-like-a” relationship
• Encapsulation
Key feature of object-oriented programming
Separation of interface from implementation
It is not possible to access the private parts of an object
What is the history of the data structure in C++?
In the early 1970s, Dennis Ritchie of Bell Laboratories was engaged in a project to develop a new operating system. Ritchie discovered that in order to accomplish his task he needed the use of a programming language that was concise and that produced compact and speedy programs. This need led Ritchie to develop the programming language called C.
In the early 1980s, also at Bell Laboratories, another programming language was created which was based upon the C language. This new language was developed by Bjarne Stroustrup and was called C++. Stroustrup states that the purpose of C++ is to make writing good programs easier and more pleasant for the individual programmer. When he designed C++, he added OOP (Object Oriented Programming) features to C without significantly changing the C component. Thus C++ is a "relative" (called a superset) of C, meaning that any valid C program is also a valid C++ program.
There are several versions of the C++ language, of which Visual C++ is only one. Other dialects include Borland C++, Turbo C++, and Code Warrior (Mac). All of these software packages enable you to create computer programs with C++, but they all implement the C++ language in a slightly different manner. In an attempt to maintain portability of both the C and C++ languages, the American National Standards Institute (ANSI) developed a standard of consistency for C and C++ programming. While we will be working primarily with this ANSI standard, we will also be examining the idiosyncrasies of Microsoft Visual C++ 6.0.
Note: Due to their power and ease of use, C and C++ were used in the programming of the special effects for Star Wars.
Characterizing History Independent Data Structures
Jason D. Hartline, Edwin S. Hong, Alexander E. Mohr,
William R. Pentney, and Emily C. Rocke
Department of Computer Science, University of Washington, Seattle, WA 98195.
{hartline,edhong,amohr,bill,ecrocke}@cs.washington.edu
Abstract. We consider history independent data structures as proposed
for study by Teague and Naor [2]. In a history independent data structure,
nothing can be learned from the representation of the data structure
except for what is available from the abstract data structure. We show
that for the most part, strong history independent data structures have
canonical representations. We also provide a natural less restrictive definition
of strong history independence and characterize how it restricts
allowable representations. We also give a general formula for creating dynamically
resizing history independent data structures and give a related
1 Introduction
On April 16, 2000, the New York Times published an article regarding the CIA’s
role in the overthrow of the Iranian government in 1953. In addition to the article,
the Times’ website posted a CIA file from 1954 which detailed the actions of
various revolutionaries involved in the plot. The Times opted to black out many
of the names mentioned in the document; some of the people referred to were
still alive and residing in Iran and could have been put at risk for retribution.
The file was published as an Adobe PDF file that contained the original document
in its entirety and an overlay covering up parts of the document. Shortly
after releasing the document, some Internet users reverse engineered the overlay
and made the original document available on the Web. In an environment
where information is valuable, private, incriminating, etc., data structures that
retain information about previous operations performed upon them can cause
considerable problems; the Times’ blunder represents a particularly grievous instance
of this. History independent data structures are designed not to reveal
any information beyond that necessarily provided by the contents of the data
structure.
The idea of maintaining a data structure so that no extraneous information is
available was first explicitly studied by Micciancio [1]. This work studied "oblivious
trees” where no information about past operations could be deduced from the
pointer structure of the nodes in the search tree. In [3], Snyder studied bounds
on the performance of search, insert and delete functions on uniquely represented
data structures, which employ a canonical representation for each possible state
of the data structure. More stringent history independence requirements were
studied by Naor and Teague in [2]. In their model, the entire memory representation
of a history independent data structure, not just the pointer structure, must
not divulge information about previous states of the data structure. Following
[2] we consider two types of history independence: weak history independence, in
which we assume that a data structure will only be observed once; and strong
history independence, in which case the data structure may be observed multiple
times. A data structure is history independent if nothing can be learned from
the data structure’s memory representation during these observations except for
the current abstract state of the data structure.
In Section 3 we give a simple definition of strong history independence. In the
full version of this paper we show that this definition is equivalent to that of [2].
Under this definition, any strong history independent implementation of a data
structure must satisfy a natural canonicality criterion (Section 4). For example,
a strongly history independent implementation of a hash table has the property
that up to randomness in the initialization of the hash table, e.g., the choice of
hash functions, the hash table’s representation in memory is deterministically
given by its contents. This answers an open question posed in [2] about the
necessity of canonical representations.
In Section 5 we consider a natural relaxation of strong history independence,
where non-canonical representations and randomness can be used. However, we
show that even under this less restrictive definition, there are still very stringent
limitations on using non-canonical representations.
Finally, in Section 6 we discuss the issue of creating dynamically resizing
history independent data structures. We give a general technique for dynamically
resizing weak history independent data structures in amortized constant
time against a non-oblivious adversary. We prove that no such technique exists
for strongly history independent dynamically resizing data structures. This
result provides insight into the open problem of whether there is a complexity
separation between weak and strong history independence [2].