Sunday, May 26, 2019

Compiler Optimisation : CPU/GPU/Vector/Float : Transparent Open CL Direct compute

OpenCL, WebCL, web compute, DirectCompute & the integration of CL code into gaming & web content.
We are able to utilise all processing-unit types by running the majority of code within a single class.
Such is the case, but we need to optimise the interrupt cycle with OpenCL compute.

After all, is it not the objective to produce code that compiles well on all of them?

Streamlining the coding stacks into the OpenCL pipeline simplifies the complicated task of writing in OpenCL & also optimises code in such a way that it is:

Faster, smaller & safer.
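For a sense of what the OpenCL side of such a pipeline looks like, here is a minimal, illustrative OpenCL C kernel (device source only; the host-side setup is omitted, and the names are my own sketch, not part of any shipping pipeline):

```c
// Minimal OpenCL C kernel sketch: one work-item per array element.
// The runtime maps work-items onto whatever compute units are present:
// GPU lanes, CPU SIMD lanes, or plain scalar cores.
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    size_t i = get_global_id(0);   // this work-item's element index
    out[i] = a[i] + b[i];
}
```

The same kernel source can be compiled for CPU, GPU or accelerator targets, which is the portability argument being made here.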

****

The aim is a usable and fully functioning implementation of OpenCL DirectCompute,
improving the CPU MMX, scalar and vector pipelines within the system architecture of present & future systems.

The OpenCL DirectCompute pipeline is predominantly vector, and therefore
OpenCL DirectCompute pipelines are by nature compatible with scalar/vector architecture.
This means CPU, float, vector & GPU; it also means that OpenCL structures can be encoded directly into scalar/vector, GPU & CPU pipelines easily, using vectorising compilers.

The advantage of vector pipelines is the cache; by this I mean all processor caches.
Vectorised pipelines are also faster for integer work.

While directly vectorising pipelines may be problematic, mathematically clean, balanced vectors are by nature error-resistant & fast.

Therefore OpenCL DirectCompute is implementable by encoding, that is to say compiling, directly
to the CPU feature sets & GPU. Vectorised floats are manageable with AVX and scalar/vector functions and are usable by floating-point units; non-floating-point variables are capable of CPU integer encoding.

Compiling code from the languages GCC & other compilers support into vectorisable, variable-adaptable code is also manageable through transparent compilation, which hides the necessity to compile code that is only usable one single way.


The usability of OpenCL compilers, able to write instructions for vector processing units & float &, yes, even integer instruction sets such as x64, is most important.

Functions of the floating-point unit (FPU), the scalar/vector unit & the integer unit
are fully compilable with instruction conversion & microcoded objects.

This allows the HPC system to convert all available compute units into real estate for high-performance computing and gaming, rendering and dynamically compiled code objects, all at the same time.

Since compilers such as GCC shall have object compilers for OpenCL DirectCompute code,
direct from C++ & Fortran, the object code will be optimised per compute unit by class:

Float, integer.

****

Data Example Chart: OpenCL-backed data object hive for data science, web pages and gaming


With pages in PHP, backed by a database and rendered through
OpenCL DirectCompute maths scripting, the output goes into Vulkan, ESGL & DirectX,

into object-oriented data & charts, into 2D/3D web pages & charts or diagrams.
Alternatively, high-performance computing rendering can compile and output:

Machine-intelligence data, medical data sets, bioinformatics & gaming data informatics such as console & 3D renderings,

into web pages, including the JS, PHP & database that back the dataset hive, whose scientific readability & look we need to improve.

****

However, since float and integer exist together, coding has become a slippery squirrel, so to speak.
Quite often floats in vector code end up being translated to and from integer over and over,
and this transfer of data between float and integer is quite inefficient.

For example, floating-point definitions of page layout in PHP can lead to errors in web-page layout & cost additional RAM & .php page file size (though we do use compressed UTF-8 with GZIP).

Only the final form definitely needs to fit into the finite data set of integer or float.
For rendering in high definition & VR realities we really need 64-bit precision, or at least float,
while data saved as integers saves money & resources such as storage.

Converting float data into integer & integer into float makes better use of resource allocations, & we do need to ensure that the output pipeline is float on higher-than-HD displays!

We can allocate OpenCL-compatible code to integer units quite easily, but as stated we need clear lines between sets of integer code and float code for optimisation reasons.

****
https://science.n-helix.com/2018/01/integer-floats-with-remainder-theory.html

https://science.n-helix.com/2016/04/3d-desktop-virtualization.html

https://science.n-helix.com/2017/02/open-gaming.html

Compiler books & reading : https://science.n-helix.com/2017/04/boinc.html

https://www.khronos.org/sycl/

https://llvm.org/

https://gcc.gnu.org/

(c)Rupert Summerskill

Thursday, May 16, 2019

Zombie Load bug update solution

15/05/2019 - ZombieLoad bug: Intel: mitigations said to slow down processors, especially with Java

https://www.pcgamesn.com/intel/zombieload-mds-vulnerability-security-patch-hyperthreading-mitigation-performance

How do we avoid the performance loss? A believable solution:

Essentially we have to make the speculative-load cache private to the operating system at a minimum; we can still use masked data loads above the system level, but we need to verify the task ID and PID and, where possible, the tab/window or process ID.

Essentially we need to trim the dataset to the process, in an ML tree.

Processor: privileged execution by kernel, by application list & privilege level, in regard to the retrieved data.

Memory containment is not just the prefetch stack but also the system, OS & process.

Mitigation by security dam, masking data & antivirus software.

Essentially, prefetch data is necessary for assemblers & coders to optimise the code stack;

however, security privilege levels for accessing the code within the entire Windows stack are to be prioritised by privacy level.

Programs that optimise execution priority ideally need access to data on the execution timeline & data fetches.

However, accessing an application's memory array in random address space needs to be tailored to the type of execution: who it is by,

the privilege level & the process that created the interception, relative to the executed process.

While this may prove measurable protection, low-level kernel-executed viruses would still be able to access everything above them.

Masking in the form of up & down privilege priority and task child/father/mother relations is a complex machine-learning theatre of war,

a field of operation requiring advanced & sophisticated kernel & userland cyber security.
Contributing elements such as memory encryption & key data-field scrambling/masking, which guard against spying, snooping and viruses, do also enable payloads to go unfound.

In short, solutions that enable privacy for processes are also to be enabled for antivirus & security threat detection.

Complex systems of personal protection will also have to scan for code (JS & other applicable code) that is out of place within the appliance framework/stack, without compromising the security/privacy we personally seek.

Masking data is a processing task subject to objective fair-use policy & usable system operation: optimisation ability; memory clearing; field reduction, use or re-use; personal & impersonal information or data subsets;

tasks & management.

(c)Rupert S

Fix-Spectra.bat enables the patches in Windows:

https://science.n-helix.com/2018/01/microprocessor-bug-meltdown.html

According to Microsoft's post, the update of 14/05/2019 or later must be installed, and all VMs need to be shut down, restarted & updated.

More details:

https://www.datacenterknowledge.com/security/here-s-how-zombieload-affects-data-centers-and-what-do-about-it

https://software.intel.com/security-software-guidance/insights/deep-dive-intel-analysis-microarchitectural-data-sampling

Update 2: Buffer security strategy


To obtain buffers for one application only,
extra buffers are deployed. These buffers can be cleaned or contain application-specific data;
they are program-specific and contain data for only one program.

Remember that clear-buffer fetching can be done from a single place, involving a single cached fetch cycle and memory-location modification on write (memory relocation), and these buffers are to be in level 2 or 3 cache.

Thus we are able to maintain a clear buffer; after all, clear buffers are not program-specific, so one will do, and hence a single fetch by the cache.

As stated, buffer security plans include localised buffer fetch sets, application-specific & secure.

Strategy 2

Buffer arrangement is tiered in strategy 2:

Tier 1 : Tier 2 : Tier 3 : Tier 4 : Tier 5

The same way we draw an ML diagram: core cache : secure tier 1 : secure tier 2

The arrangement can be by PID & father/daughter sets, and does not necessitate clearing the buffer unless this is required; in the case of a clean buffer, a clear standard buffer is already in cache & is swapped in.

This strategy avoids buffer-clearing cycles directly interfering with the program execution cycle,

for buffers are either program-specific in a key ring or already clear / state-flushed.

(C)RS