Wednesday, January 17, 2018

Logos data & Time space dimensional relations.


Accrued data from LIGO seems to suggest that gravitational waves bounce around the particulate mass of the universe,
because the point map of the wave in 3D provides the picture:
the data is imprecise and appears refracted through masses.

We can infer that the wave bends within the framework of gravity & mass...

What precisely that signal is bouncing off is unclear at this point;
to me it is probably the density of sub- and higher-dimensional space, and/or potentially mass interaction.
Logically this means that mass is held within a 3-, 4- (etcetera) dimensional web (space-time): Space (3), Time (1² variable), Energy (1² x D), Mass, Quantum Fluctuation, Polarisation (spin).

Not only held by, but logically placed within, that web; statically, in point position within the web.
What I mean by this is that the points within the web
are of a specific length for the same mass, and also flexible relative to mass;
thus this could potentially relate to black holes being in a time-relative matrix.

Mass creating waves is found at a location within space-time, and data from radio waves suffering distortion or direction-shifting parameters would move subtly, shift in value or direction, and elongate or shorten in bandwidth (possibly temporarily).

Any measurement we can think of for the effects that gravitational waves have is problematic because the variable is believably small; however, the small details matter.

Relating the speed of light to the speed of gravitational waves
tells us about the observable density of space, and also about what we would call relational strings in space-time.

Also, two videos: the discovery of gravitational waves from merging black holes at LIGO:
New Scientist



While we encounter many measurable energy patterns within our reality, many of them require considerable effort to understand & also to measure;
thankfully, however, we have recorded in great detail from the 1950s onward & in every direction.

Underground detectors are able to detect both energies & movement; therefore we are able to examine many factors. Examples are: gravitational waves; universe or galaxy motion in space-time due to vibration or mass & movement; light & radio (the electromagnetic spectrum); energy value.

The reality is that experimentation costs are ever larger & budgets will have to match; however, the rewards of the scientific work may include saving lives, progress, space ships & better technology (as examples).


Thankfully, NGO science collaborations (such as E@H) are involved in examining the data we collect.

Paper on examining spectral interference by gravitational waves, by E@H.

The radio wave spectrum of the Cas A, Vela Jr. and G347.3 search data sets can & will be examined for other properties of the radio search, such as the emission spectrum of the stars & also the type of signals received,

such as the modulation of the waves, where such data as we all need about stars will be found; if only to find data that avoids error in SETI or in spectral analysis for contained matter.

For example, the wave bands of the 3 neutron stars are a 90% match for one another at the level of the spectrum graph.

However there are a few differences between the stats of the stars, maybe accounting for mass/age content & energy values. We can compare the 3 in a matrix to examine the variables against known size, spin rate & other factors, for further research on size- and age-variable data.


integer floats with remainder theory

integer floats with remainder theory - copyright RS

The relevance of integer floats is that we can do 2 things: float on integer instruction sets at half resolution, and (the remainder theorem) convert back and forth with data. A 32Bit Int splits as:

16Bit.16Bit
24Bit.8Bit
28Bit.4Bit

RAM/memory and hard drive storage are major components, & we also need to consider compression & colour formats like DOT5.

Machine learning & Server improvement: Files attached please utilise.

Integer_Float Remainder op Form 1:(c)RS

The float is formed of 2 integers:

one being the integer and the remainder being the floating component;

thus we need two integers per float. For example, 2 32Bit integers make one single float instruction.

integer A : Remainder B

A + B = float
(A + B) x (A²+B²)

= float C dislocating A and B by a certain number of places = a float that travels as the integer.

Expansion data sets:

A1 : B1
A2 : B2
Ar : Br

F1 : Bf1
F2 : Bf2
Fr : Bfr

A : Integer
F : Float
r : Remainder

The data set expansion can be infinite, and expansion of the data set doubles the precision
with the remainder; infinite computation = infinite precision.

Not only that, but the computation can be executed as an integer, as a float, or indeed as both.
The relevance is that on computers there are a lot of integer registers; floats also.
Also, the data can be compressed in RAM without using larger buffer widths.
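As a minimal sketch of the integer-pair idea above, assuming a fixed number of decimal places for the remainder; the function names and the Python form are illustrative only, not part of any real instruction set:

```python
# "Integer_Float" sketch: a decimal value held as two integers,
# the whole part (A) and the fractional part scaled up to an integer (B).

PLACES = 4          # assumed: how many decimal places the remainder holds
SCALE = 10 ** PLACES

def pack(value):
    """Split a decimal value into (integer part, scaled remainder)."""
    a = int(value)
    b = round((value - a) * SCALE)
    return a, b

def unpack(a, b):
    """Recombine the integer pair back into a float."""
    return a + b / SCALE

a, b = pack(15.05)
print(a, b)          # -> 15 500
print(unpack(a, b))  # -> 15.05
```

Both halves are plain integers, so they can travel through integer registers and be recombined only at the end, which is the point of the scheme.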

copyright Rupert Summerskill

COP-Roll : (c)Rupert S

ROLL Operation Syntax : RS :(Integer & Float)

Processing Cache (displacement) Operation Roll Arithmetic Maths : For
Multiplication, Division, Addition & Subtraction : P-COR-SAM

Addressable by Compiler update, Firmware update, CPU/GPU Rules
Firmware, Bios, Operating System & Program/Machine Learning.

Machine learning will considerably enhance cache & processor routine
operation efficiency & make rules for all developers & firmware.

AI Machine Learning Optimization :

In a single loop, a multiply at a floating-point precision of under 1 (for example 0.00001) requires that:

In Integer float :

A multiply of a sum such as 15.05 * 3 is 2 operations:
(15 x 3) + ((roll 0.05 left 2 places) * 3) = 45 + R, where R = (5 x 3) rolled back 2 places, giving 45.15

In other words: 2 storage values, R the remainder (the float component) & the number.
However, multiplication by a float such as 0.01 is a division in one example & a multiply-and-roll in another.

Roll is a memory operation in CPU terms & is a single processor-loop push.

In all operations where division is banned, we have to decide whether the operation is a multiple or a division of base value 1, 10, 100 & beyond.

Such an operation can be carried out by addition, subtraction or roll; values such as *200
require multiple additions under the multiply-is-banned principle.

Multiple sets of memory arrays in series-parallel are the equivalent of multiplication through addition;

subtraction through addition requires inverting the power phase of a single component array.
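The multiply-through-addition and roll principles above can be sketched as follows. Here the "roll" is a binary shift, and the worked 15.05 * 3 example is carried out purely with integer adds; all function names are illustrative:

```python
# Multiplication without a multiply instruction, as described above.

def mul10_no_multiply(x):
    # x * 10 == x * 8 + x * 2 == (x << 3) + (x << 1): two rolls and one add
    return (x << 3) + (x << 1)

def mul_by_repeated_add(x, n):
    # the "multiply is banned" fallback: n additions of x
    total = 0
    for _ in range(n):
        total += x
    return total

def mul_int_float(a, b, n, places=2):
    # (A + B / 10^places) * n computed purely with integer operations:
    # multiply whole part and remainder separately, then carry overflow.
    scale = 10 ** places
    whole = mul_by_repeated_add(a, n)
    rem = mul_by_repeated_add(b, n)
    whole += rem // scale        # carry any overflowed remainder digits
    rem %= scale
    return whole, rem

print(mul10_no_multiply(7))        # -> 70
print(mul_by_repeated_add(15, 3))  # -> 45
print(mul_int_float(15, 5, 3))     # 15.05 * 3 -> (45, 15), i.e. 45.15
```

The shift form is the fast "roll" path; the repeated-add form is the slow but gate-cheap path the text calls arrays of addition.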

Thus we are able to do addition and subtraction for all sums; traditional maths solves have done this before.
Roll operations are our fast way to multiply;

however, arrays of addition & subtraction are a logical fast loop:
a full operation in a single cycle, because there is no sideways roll.

However, direct memory displacement between 010100 & 101000 can use a cache to displace a 1;
an arrangement such as a 4-digit displacement cache can roll the operation on memory transfer.

Displace on operation (cycle 2) does minimize operations.

Having that cache further up the binary pipeline does reduce the number of roll cache modifier buffers that we need,

However, the time we save, the time we lose & the CPU space we lose or gain depend specifically on how limited the Roll Cache is.

Integer_Float Remainder op Form 2:(c)RS

32Bit (2x16Bit) is the most logical for 32Bit registers
64Bit (2x32Bit) is the most logical for 64Bit registers

Byte Swap operation
Byte Inversion operation

For example, DWord: 8

2 x DWord: 8Bit integer & 8Bit (4 roll places & 4Bit value).
Displacing the value 4 bits in 8 makes the value an integer;
alternatively, adaptive maths adds a 0 (as for example in multiplication) & removes it afterwards.
The usage of adaptation takes the second DWord & effectively makes it an accurate remainder.

In that example I believe one fewer operation is needed in the 16Bit example.

Operation example 2 uses an embedded multiply (x 10) & a divide afterwards (to get the resulting float).

32Bit memory space: 2x 16Bit values, 1 16Bit integer & 1 "0." value
that can effectively be displaced 16 decimal places.

The maths required as displayed above requires inverting multiply & division
for Mul & Div ops on the remainder; however, it does not when used finally
in the FLOAT unit (FPU) for large-precision maths.

This allows a fully integer CPU to do float maths and store it as integer,
both allowing full use of all registers & also storage as purely Integer_Float.
It also allows full cache usage for SiMD, AVX & vector units.

Byte inversion simply allows byte swap & inversion to fully realise performance improvements,
& also byte-inversion maths.

SiMD,AVX,Vector : ByteSwap,Invert,Mul,Div etcetera Ergo Float compatible & Acceleration
Float : High Precision finalisation .. Lower Frequency = More potential
Integer + Byte Functions : Pure Acceleration with minimal loss Core Function utilisation

This is all algebra; Categorically.
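A hedged sketch of the byte-swap and bit-inversion primitives named above, in Python for illustration; a real CPU would do these as single instructions (x86 BSWAP and NOT, for instance), and the function names here are my own:

```python
import struct

def byteswap32(x):
    # reverse the byte order of a 32-bit value (the Byte Swap operation)
    return struct.unpack("<I", struct.pack(">I", x))[0]

def invert32(x):
    # flip every bit within a 32-bit word (the Byte Inversion operation)
    return x ^ 0xFFFFFFFF

print(hex(byteswap32(0x11223344)))  # -> 0x44332211
print(hex(invert32(0x0000FFFF)))    # -> 0xffff0000
```

Because both operations are pure bit permutations, they compose freely with the integer packing described in the rest of the section.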

(c) Rupert S

Optimisation & Use:


Multi-line Packed-Bit Int SiMD Maths : Relevance HDR, WCG, ML Machine Learning (Most advantaged ADDER Maths)

The rules of multiple maths at lower bit widths packed into SiMD 256Bit (for example); 64Bit, 128Bit & 512Bit can also be used.

In all methods you use packed bits per save: a single-line save or load, parallel, no RAM thrashing.

You cannot flow a 16Bit block into another segment (the next 16Bit block).

You can however use 9 bits as a separator, & rolling an addition into the next bit means a more accurate result!
In 32Bit you do 3 x 8Bit & 1 x 4Bit; in this example the 4Bit op has 5Bit results & the 8Bit ops have 9Bit results.
This is preferable!

2Bit, 3Bit, 4Bit Operation 1 , 8Bit Operations 3: Table

4 : 1, 8 : 3

4 : 2, 8 : 6
2 : 1, 7 : 8
3 : 1, 8 : 1, 16 : 3

Addition is the only place where 16Bit x 4 = 64Bit works easily; but when you ADD or subtract you can only roll to the lowest boundary of each 16Bit segment, & not into the higher or lower segment.

A: In order to multiply you need adaptable rules for division & multiply.
B: You need a dividable maths unit with AND, OR & NOT gates to segment the registered Mul SiMD unit.

In the case of + and * you need to use single-line-rule addition (no overflow per pixel),
& either many AND/OR/NOT gate layers or parallel 16Bit blocks.

You can however, painful as it is, multi-load & zero the remainder registers (AND/OR/XOR/NOT the remainder to 00000) on higher-depth instructions, & so remain pure!

8Bit blocks are a bit small, and we use HDR & WCG, so they are mostly pointless!

We can however 8Bit-write a patch of palette & subdivide our colour palette & light/shadow curves in anything over 8Bit colour depth.

In the case of the Intel 8Bit x 8 inferencing unit: 16Bit colour in probably (WCG 8 x 8) + (HDR 8 x 8) segments.

In any case, addition is fortunately what we need! So with ADD we can use SiMD & integer today.
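The no-overflow packed ADD described above can be sketched in software as a SWAR-style addition: four 16Bit lanes carried in one 64Bit integer, with each lane's top bit masked out and folded back in so that no carry ever crosses a segment boundary. This is an illustrative sketch of the technique, not the author's hardware design:

```python
# SWAR add: four independent 16-bit lanes inside one 64-bit integer.

LANE_MASK = 0x7FFF7FFF7FFF7FFF  # low 15 bits of every 16-bit lane
HIGH_BITS = 0x8000800080008000  # the top bit of every lane, handled separately

def packed_add16x4(a, b):
    # adds within each lane can carry only into bit 15 of that lane,
    # never into the neighbouring lane (the "no overflow per segment" rule)
    low = (a & LANE_MASK) + (b & LANE_MASK)
    # fold the top bits back in; each lane wraps modulo 2^16 independently
    return low ^ ((a ^ b) & HIGH_BITS)

def pack4(v0, v1, v2, v3):
    return v0 | (v1 << 16) | (v2 << 32) | (v3 << 48)

a = pack4(1, 2, 3, 0xFFFF)
b = pack4(10, 20, 30, 1)
r = packed_add16x4(a, b)
print([(r >> s) & 0xFFFF for s in (0, 16, 32, 48)])  # -> [11, 22, 33, 0]
```

Note the last lane: 0xFFFF + 1 wraps to 0 inside its own segment instead of carrying into the next one, which is exactly the boundary behaviour the text requires.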

Rupert S


Main Operation solves: Bit-Depth Conversions & Operations

The storage of multiple bit operations with Sync Read & Write,
The purpose of this is to Read, Write & Store Operations on:

F16, F32, F64

In RAM of 32Bit, 64Bit, 128Bit

Values Storage Table

32Bit = [16bit:16Bit]
32Bit = [8bit:8Bit:8bit:8Bit]
32Bit = [4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit]

64Bit = [32bit:32Bit]
64Bit = [16bit:16Bit:16bit:16Bit]
64Bit = [8bit:8Bit:8bit:8Bit:8bit:8Bit:8bit:8Bit]
64Bit = [4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit]

128Bit = [64bit:64Bit]
128Bit = [32bit:32Bit:32bit:32Bit]
128Bit = [16bit:16Bit:16bit:16Bit:16bit:16Bit:16bit:16Bit]
128Bit = [8bit:8Bit:8bit:8Bit:8bit:8Bit:8bit:8Bit:8bit:8Bit:8bit:8Bit:8bit:8Bit:8bit:8Bit]
128Bit = [4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit:4bit:4Bit]

Bear in mind that a 64Bit integer unit is 2 x 32Bit on AMD, so you can compute 2 operations at 32Bit per 64Bit operation.
Some 64Bit units are only 64Bit, so we need to know how many!

32Bit operations are fine! & Conversion of 16Bit value ranges into 32Bit Operations can still be within range of 16Bit Storage..
If we stick within the 16Bit value range on Multiply & ADD,
We can therefore simply post a 16Bit value range data set & expect to be able to Store 16Bit!

The simple method is to store 2 16Bit values in the same 32Bit table, like [16bit:16Bit] = 32Bit.

With this we can load, store, run & save 8Bit INT8 operations on 32Bit devices such as Alexa, as 8Bit x 4 = 32Bit, so we don't waste RAM or resources!

But we still have access to 32Bit RAM Paging; But with values loaded in 4Bit, 8Bit, 16Bit, 32Bit & so on.
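A small sketch of the storage-table packing above, assuming four INT8 values in one 32Bit word so that a 32Bit device loads or stores them in a single access; the helper names are illustrative:

```python
# Pack four 8-bit values into one 32-bit word, per the storage table
# 32Bit = [8bit:8Bit:8bit:8Bit] above.

def pack8x4(vals):
    assert len(vals) == 4 and all(0 <= v < 256 for v in vals)
    word = 0
    for i, v in enumerate(vals):
        word |= v << (8 * i)   # value i occupies byte i of the word
    return word

def unpack8x4(word):
    return [(word >> (8 * i)) & 0xFF for i in range(4)]

w = pack8x4([1, 2, 3, 4])
print(hex(w))        # -> 0x4030201
print(unpack8x4(w))  # -> [1, 2, 3, 4]
```

The same shift-and-mask pattern extends directly to the 16Bit-in-32Bit and 16Bit-in-64Bit rows of the table.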

With NANO Android on F16 & F32, MIPS the same, & AMD, Intel, NVidia:
learning F16 offers considerable value for performance, with 16M values!


Direct DMA 32Bit & 64Bit RAM : Multiple Sync 16Bit Texture:

A good example of where 8Bit & 16Bit value loads work well is the texture:
loading 4 x 16Bit into a single 64Bit cache line:

32Bit RAM = 16Bit, 16Bit
64Bit RAM = 16Bit, 16Bit, 16Bit, 16Bit
128Bit RAM = 16Bit, 16Bit, 16Bit, 16Bit, 16Bit, 16Bit, 16Bit, 16Bit

In the case of direct DMA, you would be aware that you have:
a 128Bit or 192Bit bus on GPU,
32Bit & 64Bit on CPU.

So a direct 4 x 32Bit or 2 x 64Bit cache load is a logically fast method to DMA directly from cache to GPU!
In short, you convert 8 x 16Bit into a 2 x 64Bit DMA push, which is very fast!

You can do the same with batches of vertices in many storage sizes.
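The 8 x 16Bit into 2 x 64Bit batching can be sketched like this (illustrative Python; in a real system the packed words themselves would be what the DMA engine pushes):

```python
# Batch 16-bit texels into 64-bit words: eight narrow values become
# two wide DMA pushes instead of eight narrow ones.

def batch16_to_64(texels):
    assert len(texels) % 4 == 0, "four 16-bit texels per 64-bit word"
    words = []
    for i in range(0, len(texels), 4):
        w = 0
        for j in range(4):
            w |= (texels[i + j] & 0xFFFF) << (16 * j)
        words.append(w)
    return words

texels = [0x0001, 0x0002, 0x0003, 0x0004, 0xAAAA, 0xBBBB, 0xCCCC, 0xDDDD]
print([hex(w) for w in batch16_to_64(texels)])
# -> ['0x4000300020001', '0xddddccccbbbbaaaa']
```

As the text notes, the same grouping works for batches of vertices in other storage sizes; only the lane width and count change.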



On the subject of how deep a personality of 4Bit, 8Bit, 16Bit is reference:


HDR Pure flow (c)RS
Data is converted from the OS to the GPU as compressed & optimized memory data, into dithered & optimized, smooth, precise rendering on every compatible monitor and other device.
The reason we do this is flow control and optimization of the final output of the devices; also, the main chunk of data the OS used is transparently the best.
It is in 5D, 4D, 3D & 2D data, and can thus be pre-compressed, cache-optimized & rendered.

Combat grace

The creation of ideal minds for sport, and the energy combat gains from nature.

In the photograph we can see two boxers training, We see potential action in the moment..

One would ask the community what would we do ourselves in combat?

As children we may find experience through exploration, and here we display the finest attribute of potential a man or woman expresses: the potential to adapt to the environment we live in.

Commonly we create the scenario in our minds; here we do, and in doing so we progress in experience.
Training is like a dream that directs the future.

"Heavyweight stars Gary Mason and Frank Bruno spar at The Royal Oak, Canning Town, carefully watched by Jimmy Tibbs: potential, power & vibrant grace."

Thursday, January 4, 2018

Microprocessor bug Meltdown

VM and microprocessor bug fixes are incoming;
hopefully microcode quickly also.

Creating a better virtualization header that is
more efficient at isolating the contained OS, with attributes in the OSes to contain secured data?
We find answers to improve efficiency and protect against VM-to-VM data transfer, or use this for a creative purpose!

We need answers! and science. : Microcode update

"First responder RS"

"Thank you for Google's firm responses to the bug; faith in Google is high.
Can the microcode be updated to flush &/or contain the speculative data in data-cycle-secure storage,
within the framework of cache and RAM/virtual RAM?
Cycle efficiency would be at most two cycles and a flush XOR bit-data overlay.

Bit masking before and after pre-fetch presents & also uses data; this method would be fast! (c)Rupert S"

"Obviously, in light of buffer exploitation, we would suggest that buffers are cleared after password entry. This is not the whole solution, because the spy program could be resident.

Buffer exploitation is a common practice in viruses, and this type of attack is nothing new;
there is no doubt that buffers are a victim of flooding and exploitation, over and over!
After all, buffer exploitation is a logical consequence of their use on a computer or hardware.

Randomizing the buffer allocation, location and encryption algorithm is the most logical choice in hardware. However, how much effort must be made to protect buffers when an attack on them is predictable and logical? A lot, we say.

(c)Rupert S"
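A minimal sketch of the buffer-clearing suggestion in the quote above, assuming the secret lives in a mutable buffer. This only illustrates the idea; it is not a complete defence, since a resident spy process could still read the buffer before it is cleared:

```python
# Overwrite a sensitive buffer as soon as it is no longer needed,
# rather than leaving the secret in memory for the garbage collector.
# bytearray is used because it is mutable, so the overwrite is real.

def clear_buffer(buf: bytearray):
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"hunter2")   # illustrative password buffer
# ... use the password here ...
clear_buffer(secret)
print(secret)  # -> bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```

In lower-level languages the same idea needs an explicit secure-zero routine, since an optimising compiler may remove a plain memset of a buffer it considers dead.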

Google systems have been updated for Meltdown bug

attack mitigation -

"Microsoft issued an emergency update today.
Amazon said it protected AWS customers running Amazon's tailored Linux version, and will roll out the MSFT patch for other customers today."

We need answers! and science. : Microcode : update

(c)RS - examination and findings direction of HPC Development - will Random/Entropy drivers help - function examined and processed.

about the bug :,36219.html


A detailed and interesting article with many details; Well written. (12 jan 2018)

Gaming performance:


report PDF on mitigation - (requires signing) :

AMD's concern for security led them to make cache work differently right from the start, whereas Intel chose to pre-fetch kernel & secure data on the presumption that this could rarely be exploited (this was published in the past; we read about it). RS

As we can see, AMD has a security focus & did also in 2005, when the pre-fetch method came up for debate.


"Details of a problem have been gradually emerging, People with AMD Athlon-powered computers say that following the installation of the patch, it is impossible to boot into Windows leaving a full re-installation as the only option -- although some users report that even this does not fix the problem. "

(Possibly related to the antivirus program incompatibility; some AVs, possibly! We need a list, preferably now.)

The Athlon PC patch is being re-engineered so that it works on Windows 10 - not related to newer AMD chips:

Intel information with sub-tabs (of interest)

On the front of the kernel patch: 4.4.0-108 (Ubuntu) is bricking some older Athlon models, apparently...
4.4.0-109 is the fixed version; further information would be useful but is currently too hush-hush for full disclosure - google 4.4.0-109 for more information.


On the GPU front we can see that, since cache pre-fetch is the issue, all classes of GPU/CPU & other processor classes with cache may well face issues.

Crypto keys need replacing due to the Meltdown bug (after patching!)
due to system compromise. (c)RS

The Meltdown and Spectre security firmware update is more important to Bitcoin, crypto coins / crypto coin wallets & block-chain than the price! Read it now and update.

Firmware Updates and Initial Performance Data for Data Center Systems - information on intel,AMD & other components

HPC View of Meltdown and a few patch updates
AMD affirmative patch inbound to secure lesser risk in conscientious market. - good update

As of 23/01/2018 the Intel CPU patch has as yet failed to be fully effective against system instability caused by unexpected side effects:
further improvements are sought. One suggests a better cohesive response between low-level OS companies like Red Hat Linux, Microsoft, Apple and Android, together with hardware developers - interactive people, RS

Power 7/8/9 update :

01/02/2018 - additional AMD patch - Windows 10 Build 16299.214 :,36440.html
15/02/2018 - fixed patch - top-applications speed test of patches on Stampede - Texas University - Linux


Looks like the Israeli company is asking us to suspect firmware...
Frankly, no #hardware could avoid #firmware issues!
If AMD/Intel is really being asked to call fake firmware an AMD & Intel/GPU manufacturers' security flaw... when this is in the BIOS & not randomly downloaded!

28/04/2018 Microsoft update for windows (7 & who knows!) causing security flaw. - in detail - original sc,36765.html - Retpoline: a software construct for preventing branch-target-injection

03/05/2018 - Apparently a new wave of Spectre-variant bugs appears to be in the process of being patched - ARM & Intel; as to others, we know not at the moment!
A $250,000 reward is on offer from Microsoft for a flaw solution + security bug.

Windows 10 version 1803 has been out since the 30th of April, but you have to manually update through the update tool
to get this super HDR version of Windows with better hardware support. Truly super!

Enable Windows Bugfix bat - run admin

15/05/2019 - ZombieLoad bug : Intel : said to slow down processors, especially with Java

How do we avoid the performance loss? A believable solve:

Essentially we have to make the speculative load cache private to the operating system at a minimum; we can still use masked data loads above the system, but we need to verify the task ID and PID and, where possible, the tab/window or process ID.

Essentially we need to trim the dataset to the process in a tree (ML).

Processor: privileged execution by kernel, by application list & privilege level in regard to the received data.

Memory containment is not just the prefetch stack but also system, OS & process.

Mitigation by security dam, masking data & antivirus software.

The update of 14/05/2019 or later must be installed, and all VMs need to be shut down, restarted & updated, according to the Microsoft post.


(c)Rupert Summerskill