All of the invoke operations are essentially the same, and since I do not have
to handle `invokedynamic` at all, I do not have to worry about the complexities
it brings to the virtual machine.
The interface method reference would probably be best represented as a method;
however, I do have to check it when an interface is used.
Stack overflow and underflow would probably be best handled by throwing a
virtual machine exception.
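A minimal sketch of such checks, with `JVMStack` and `VMException` as purely illustrative names (not existing classes in the project):

```java
// Hypothetical names: JVMStack and VMException are illustrative only.
class VMException extends RuntimeException {
    VMException(String message) { super(message); }
}

class JVMStack {
    private final int[] items;
    private int top;          // number of elements currently on the stack

    JVMStack(int maxDepth) {
        this.items = new int[maxDepth];
    }

    void push(int value) {
        if (top >= items.length)
            throw new VMException("stack overflow");
        items[top++] = value;
    }

    int pop() {
        if (top <= 0)
            throw new VMException("stack underflow");
        return items[--top];
    }
}
```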
The derivation fallback for the state of operations must not destroy the
top-of-stack elements unless perhaps a specific flag is set.
For method calls which return no result, I will still need to generate a
method call. Thus the atom will need an operator link itself.
It would likely be easier to generate the SSA and the specified chains and
uniques and such on the fly rather than associating them with the state of
things. Essentially, instead of having operator links for locals, stack items,
and atoms, I would have a program chain which performs operations. As for the
local and stack variables, I can have a change order for each position (likely
the PC address): when a variable is changed, its variable ID is incremented.
Then `JVMProgramSlot.unique()` would go away, because that points to the
individual slot. `JVMOperatorLink` would also go away. This way the stack and
locals would just keep their former states. The change order could be either
explicit or implicit. If a jump back is made and variables change, then a
phi function could be placed. It would work with implicit IDs: say the locals
do not change at all, then suddenly one does because a jump back is made for a
loop, so it gets updated. An alternative to all of this is something similar,
except each slot is still unique and a linear set of operations describes what
the program does. So all operations would reference a unique variable;
however, if at a given time a variable has not changed value, then an older
one is used in its place.
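The incrementing-ID scheme above can be sketched with per-slot version counters; `SlotVersions` is a hypothetical name and the `local#slot@id` naming is just the notation used later in these notes:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: each write to a local slot bumps that slot's ID, so a read
// names the exact version ("local#7@2") without any per-slot unique()
// objects or operator links.
class SlotVersions {
    private final Map<Integer, Integer> version = new HashMap<>();

    // A write increments the slot's version and returns the new name.
    String write(int slot) {
        int v = version.merge(slot, 1, Integer::sum);
        return "local#" + slot + "@" + v;
    }

    // A read uses the current version (0 if never written).
    String read(int slot) {
        return "local#" + slot + "@" + version.getOrDefault(slot, 0);
    }
}
```

A back-jump that modifies a slot would bump the version the same way, which is where a phi function would merge the old and new IDs.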
So for any given address, the virtual program would know the operations and
the state of variables. It would appear as a waterfall with logs rolling off,
so to speak. All operations would use a unique value at the given operation,
and the output of that feeds the next operation. Going backwards, when an
operation is performed it must check its inputs to see if there is a change in
the output. This would be done recursively for each variable as it is
requested. The states of variables could be cached so they can be garbage
collected as needed; the only variables which need to remain constant in
memory are the ones which set an actual value.
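The recursive backwards walk with cacheable states could look like the following sketch; `Value` and its factories are hypothetical, and only the derived result is cached (a real implementation might hold it through a soft reference so it can be collected and recomputed):

```java
import java.util.function.IntBinaryOperator;

// Sketch: a value is either a constant or is derived on demand from its
// inputs by walking backwards through former operations; the derived
// result is cached after the first request.
abstract class Value {
    private Integer cached;   // null until first derivation

    final int get() {
        if (cached == null)
            cached = derive();
        return cached;
    }

    abstract int derive();

    // A constant: one of the values which "set an actual value".
    static Value constant(int v) {
        return new Value() { int derive() { return v; } };
    }

    // A derived value: recursively requests its inputs when asked.
    static Value op(Value a, Value b, IntBinaryOperator f) {
        return new Value() {
            int derive() { return f.applyAsInt(a.get(), b.get()); }
        };
    }
}
```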
To recap: input variables (which may be virtual), and an operation. If a
variable is virtual then it propagates up to find its value based on former
operations and such. Virtually all variables would end up being virtual,
which would cut down on active memory. I can also have a pool of unique values
which are known to the entire program. I would have to handle situations where
variables are just copied to another place; in that case the unique variable
number for both locations would be the same. Each unique variable could then
have a set of operations which are performed on it. However, with propagation
upwards this would not be needed at all. The copy would be an operation itself
which just returns its input as its output, so the unique variable list is not
needed. Thus if it eats `local#7@n` then its stack variables will hold exactly
`local#7@n` despite being in the stack, instead of the output being
`stack#1@n+1`. The operation would be cached and the copy operation is
pointless. When code generation time occurs, the program can allocate
registers and stack space. Pointless operations such as copy would not be sent
to the generator at all, unless really needed.
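The copy-elision idea can be sketched as a naming stack; `NamingStack` and the `tmp@n` naming are hypothetical, matching the `local#7@n` notation above:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: a load is a pure copy, so the stack entry aliases the source's
// existing unique name ("local#7@n") instead of minting "stack#1@n+1";
// only real operations mint fresh names.
class NamingStack {
    private final Deque<String> stack = new ArrayDeque<>();
    private int counter;

    // Copy operation: its output name is exactly its input name.
    void load(String sourceName) {
        stack.push(sourceName);
    }

    // A real operation: consumes two names, produces a fresh one.
    String add() {
        String b = stack.pop(), a = stack.pop();
        String out = "tmp@" + (++counter);
        stack.push(out);
        return out;
    }

    String top() {
        return stack.peek();
    }
}
```

Since the copy kept the original name, the code generator never sees it; only the fresh names produced by real operations reach register allocation.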
Thinking about it (by not thinking about it), I can merge the byte code with
this idea. I give it a program and, instead of my own operations, it uses the
byte code; however, it also has the cached operations and such which do
things. First I get the byte code array, then I get the position of all the
instructions, then I run through them. The class can handle caching and such.
It would be a hybrid of an interpreter, with caching, and SSA so to speak,
combined into one. I can also have a cache of inputs and outputs. Then when it
comes to code generation, I will have an SSA kind of form and I can just
iterate over the byte code and generate code depending on the operations. Only
a specific set of operations would need to be handled. If some instructions do
special things, I can have an external do-something class which describes what
each does.
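The first step, finding the position of every instruction, can be sketched as a single scan over the byte code array. Only a handful of opcodes are length-decoded here; a real scanner needs the full JVM opcode table, plus the variable-length `tableswitch`/`lookupswitch` and `wide` cases. `InstructionIndex` is an illustrative name:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: scan the byte code once, recording the start offset of each
// instruction based on its opcode's operand length.
class InstructionIndex {
    static List<Integer> positions(byte[] code) {
        List<Integer> at = new ArrayList<>();
        for (int pc = 0; pc < code.length; ) {
            at.add(pc);
            int op = code[pc] & 0xFF;
            switch (op) {
                case 0x10:            // bipush: one operand byte
                case 0x15:            // iload: one index byte
                case 0x36:            // istore: one index byte
                    pc += 2; break;
                case 0x11:            // sipush: two operand bytes
                case 0xA7:            // goto: two branch bytes
                    pc += 3; break;
                default:              // assume a single-byte instruction
                    pc += 1; break;
            }
        }
        return at;
    }
}
```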
Exceptions and other verification details such as the `StackMapTable` can be
Just started to implement this; it should in the end result in a cleaner and
purer interpreter. It would also likely use less memory than what I have done
before.
I believe for simplicity in the operations I am going to condense the byte
code addresses into list addresses. This would make iteration a bit simpler,
and all instructions would take up a single address rather than multiple
addresses. However, internally they would still take up multiple addresses.
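The condensing step can be sketched as a simple translation table; `AddressCondenser` is an illustrative name, built from the instruction start offsets found earlier:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: each instruction gets a single list index, and a branch
// target's byte code address is translated to that index.
class AddressCondenser {
    private final Map<Integer, Integer> byteToIndex = new HashMap<>();

    AddressCondenser(List<Integer> instructionOffsets) {
        for (int i = 0; i < instructionOffsets.size(); i++)
            byteToIndex.put(instructionOffsets.get(i), i);
    }

    // Translate a byte code address (e.g. a goto target) to a list index.
    int indexOf(int byteAddress) {
        Integer index = byteToIndex.get(byteAddress);
        if (index == null)
            throw new IllegalArgumentException(
                "address " + byteAddress + " is not an instruction start");
        return index;
    }
}
```

An address that falls in the middle of an instruction's operands is rejected, which mirrors the rule that branch targets must land on instruction boundaries.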