For the memory pool manager I can have multiple base addresses. The base
address would be used to determine where data is placed and would also be
used for pointers in objects, for example.
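As a rough sketch of how a base address might be used (the class and method
names here are hypothetical, not anything that exists yet):

```java
/**
 * A minimal sketch, assuming the pool is backed by a plain byte array:
 * pointers stored inside objects are relative to the pool's base
 * address, so the pool's contents stay valid if it is mapped at a
 * different base. All names here are hypothetical.
 */
public class MemoryPool
{
	/** The base address this pool is mapped at. */
	protected final long baseaddr;
	
	/** The raw pool bytes. */
	protected final byte[] data;
	
	public MemoryPool(long base, int size)
	{
		this.baseaddr = base;
		this.data = new byte[size];
	}
	
	/** Converts a pool-relative pointer to an absolute address. */
	public long toAbsolute(int rel)
	{
		return this.baseaddr + (rel & 0xFFFFFFFFL);
	}
	
	/** Converts an absolute address back into a pool-relative pointer. */
	public int toRelative(long abs)
	{
		return (int)(abs - this.baseaddr);
	}
}
```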
I was thinking of having the memory pool manager handle object allocations
and such, however I believe that should instead be placed in another project.
This way the object memory management can potentially be shared by the kernel
and the interpreter.
I need a package which can handle comparison and other operations on unsigned
values.
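For what it is worth, a sketch of unsigned comparison over Java's signed
primitives, using the usual sign-bit flip trick:

```java
/** Sketch: unsigned comparison on Java's signed primitive types. */
public final class Unsigned
{
	private Unsigned()
	{
	}
	
	/** Compares two ints as unsigned 32-bit values. */
	public static int compareInt(int a, int b)
	{
		// Flipping the sign bit maps unsigned order onto signed order
		int x = a ^ 0x80000000, y = b ^ 0x80000000;
		return (x < y ? -1 : (x == y ? 0 : 1));
	}
	
	/** Compares two longs as unsigned 64-bit values. */
	public static int compareLong(long a, long b)
	{
		long x = a ^ 0x8000000000000000L, y = b ^ 0x8000000000000000L;
		return (x < y ? -1 : (x == y ? 0 : 1));
	}
}
```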
The memory pool itself should be used by the manager and then potentially
associated with the kernel.
Going to need a common and generic object manager. Monitors and locks can be
managed by atomic reads/writes of values. So the memory pool will also need
compare-and-set, test-and-set, or similar for the native types.
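A sketch of what a virtualized compare-and-set over the pool could look like
when there is no native atomic support; here the pool is just a byte array
and the operation locks it, where a real version would use an actual atomic
instruction:

```java
/**
 * Sketch of a virtualized compare-and-set on a raw pool, assuming the
 * pool is just a byte array. A native implementation would use an
 * atomic instruction instead of locking.
 */
public final class PoolAtomics
{
	private PoolAtomics()
	{
	}
	
	/** Atomically sets a big-endian int at the offset if it equals expect. */
	public static boolean compareAndSetInt(byte[] pool, int off,
		int expect, int set)
	{
		synchronized (pool)
		{
			// Read the current big-endian value at the offset
			int was = ((pool[off] & 0xFF) << 24)
				| ((pool[off + 1] & 0xFF) << 16)
				| ((pool[off + 2] & 0xFF) << 8)
				| (pool[off + 3] & 0xFF);
			
			// Only write if it matches the expected value
			if (was != expect)
				return false;
			
			pool[off] = (byte)(set >>> 24);
			pool[off + 1] = (byte)(set >>> 16);
			pool[off + 2] = (byte)(set >>> 8);
			pool[off + 3] = (byte)set;
			return true;
		}
	}
}
```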
Hopefully 16 bytes reserved at the start of the memory pool is sufficient to
handle virtualized atomic operations and such (in case there is no native
support for them).
Actually that would be a bad idea. The memory pool should just be a memory
pool which can be read from and written to. An object manager can reserve
space and such. This way the memory pools can be shared with the simulator
and such.
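So the split might look roughly like this, with the pool knowing only about
bytes and the object manager layered on top of it (both interfaces are
hypothetical and would each live in their own source file):

```java
/** Sketch: the pool is only raw storage which can be read and written. */
public interface RawMemoryPool
{
	/** Returns the size of the pool in bytes. */
	int size();
	
	/** Reads a single byte at the given offset. */
	byte readByte(int off);
	
	/** Writes a single byte at the given offset. */
	void writeByte(int off, byte val);
}

/** Sketch: the object manager reserves regions inside any such pool. */
public interface ObjectManager
{
	/** Reserves a region of the given length, returning its offset. */
	int allocate(RawMemoryPool pool, int len);
	
	/** Releases a previously reserved region. */
	void free(RawMemoryPool pool, int off);
}
```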
The atomic operations cannot be in the abstract pool either, because the
reserved bytes are gone now.
I can test differently sized pointer values in the interpreter. However, one
thing to consider is that this would limit the interpreter's maximum amount
of memory. Each instance of a loaded class for each virtual machine would
need a `Class` object allocated, along with its static fields.
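As a quick sketch of the limit involved, the pointer width directly caps how
much pool memory can be addressed (16-bit pointers give 64KiB, 32-bit give
4GiB), and reading a configurable-width pointer is straightforward:

```java
/** Sketch of configurable-width pointers inside a pool. */
public final class PoolPointers
{
	private PoolPointers()
	{
	}
	
	/**
	 * Reads a big-endian pool-relative pointer of the given byte width;
	 * the width directly limits how much pool memory is addressable.
	 */
	public static long readPointer(byte[] pool, int off, int bytes)
	{
		long rv = 0;
		for (int i = 0; i < bytes; i++)
			rv = (rv << 8) | (pool[off + i] & 0xFF);
		return rv;
	}
}
```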
So the question is, do I allocate the stacks used in the interpreter from the
memory pool? If I do, then saving the current state of execution is virtually
just a matter of storing the PC address in the currently executing method,
the stack pointers, and a few other details. With this model there could
actually be no local variables at all, with the code executing in a way where
everything is on the stack. Local variables as in registers.
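Under that model the saved state of a thread could be little more than the
following (a sketch only; the exact fields are guesses at the "few other
details"):

```java
/**
 * Sketch of the saved state of a single thread of execution when all
 * working values live on pool-allocated stacks: just the PC within the
 * current method plus the stack pointers. Field names are hypothetical.
 */
public class SavedExecutionState
{
	/** Pool-relative pointer to the currently executing method. */
	long methodptr;
	
	/** The PC address within that method. */
	long pc;
	
	/** Pool-relative pointer to the top of the value stack. */
	long stackptr;
	
	/** Pool-relative pointer to the base of the current frame. */
	long frameptr;
}
```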
I wonder what stack size my current code instance needs in order to run. This
would at least be using JamVM.
 * 2K: Crashes before start
 * 4K: Overflows at net.multiphasicapps.squirreljme.kernel.impl.jvm.JVMKernel.internalClassUnitProviders(JVMKernel.java:72)
 * 5K: Overflows at net.multiphasicapps.util.huffmantree.HuffmanTree.traverser(HuffmanTree.java:405)
The required stack sizes are dependent on the VM itself, however. All of the
overflows are essentially happening in the class loader.
What I can do, however, is that if the stack is too small it can be extended
into another allocated region (which would be locked). So really the object
manager would be more than just objects and arrays; it would also have to
handle stacks and possibly temporary executable code fragments.
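One way to picture the extension is a segmented stack, where a full segment
chains to a newly allocated one (a sketch only; a real version would allocate
segments from the pool and lock them):

```java
/**
 * Sketch: a stack that grows by chaining allocated segments, so a
 * too-small stack is extended into another region rather than
 * overflowing. Layout and names are hypothetical.
 */
public class SegmentedStack
{
	/** A single allocated segment of the stack. */
	static class Segment
	{
		final long[] values;
		int top;
		Segment previous;
		
		Segment(int size)
		{
			this.values = new long[size];
		}
	}
	
	/** The currently active (newest) segment. */
	private Segment current;
	
	/** The size of each newly allocated segment. */
	private final int segmentsize;
	
	public SegmentedStack(int segmentsize)
	{
		this.segmentsize = segmentsize;
		this.current = new Segment(segmentsize);
	}
	
	/** Pushes a value, extending into a new segment if the current is full. */
	public void push(long value)
	{
		Segment cur = this.current;
		if (cur.top >= cur.values.length)
		{
			// Extend the stack into another allocated region
			Segment next = new Segment(this.segmentsize);
			next.previous = cur;
			this.current = cur = next;
		}
		cur.values[cur.top++] = value;
	}
	
	/** Pops a value, falling back to the previous segment when empty. */
	public long pop()
	{
		Segment cur = this.current;
		if (cur.top == 0)
		{
			if (cur.previous == null)
				throw new IllegalStateException("Stack underflow");
			this.current = cur = cur.previous;
		}
		return cur.values[--cur.top];
	}
}
```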
Stack growing across extensions would technically allow stacks that are really
low in the execution space to be moved around and potentially swapped out or
compressed. That could be a bonus for space usage. Due to the way Java works,
no other method refers to another method's stack entries, so this can actually
be used as a memory-based optimization. Also, 6K is not enough to run the Java
compiler; 8K works until the kernel has to be built. When GC is performed,
however, the stacks will need to be swapped in, decompressed, and locked to
determine which objects exist on them. Using the previous plan of having a
duplicated object storage space would make determining which objects are
actually referenced quite simple, rather than needing some other way to find
out whether a value that contains an `int` actually points to an object.
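One reading of that duplicated storage plan, as a sketch: keep reference slots
physically separate from plain word slots, so the GC only ever walks the
reference slots and never has to decide whether an `int` value is secretly a
pointer (names are hypothetical):

```java
/**
 * Sketch of duplicated storage: each frame keeps object references in
 * their own slot array, separate from plain integer/word slots, so the
 * GC scans only the reference slots during collection.
 */
public class Frame
{
	/** Plain integer/word values, never scanned by the GC. */
	final long[] wordslots;
	
	/** Object references only, exactly what the GC has to walk. */
	final Object[] refslots;
	
	Frame(int numslots)
	{
		this.wordslots = new long[numslots];
		this.refslots = new Object[numslots];
	}
}
```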