Thursday, March 25, 2021

Upscaling Enhancement

Super resolution API Photo & Video Enhance & upscaling demonstrations

Upscaling Enhancement For Telescopes, Space & Research Aviation Photography & Video
Photo Enhance & upscaling:


Photographic Enhancers:


Bloodborne : "Why not shoot for 4K too? Thus began a week of experiments using a tool called Topaz Video Enhance AI, which uses a number of different AI upscaling models - and it turned out that most of them could deliver appreciably higher detail."



Department of Energy - RGB_Color-Seal_Green-Mark_SC_Vertical V2 Helix.jpg (2.08 MB) https://mirrorace.org/m/3Jz5o

JOE Science Workshop V1 - DcXw0jCU8AA-Jdk.jpg (1.09 MB) https://mirrorace.org/m/3Jz5p

SuperNova image_2144_1e-SN-1993J.jpg (1.74 MB) https://mirrorace.org/m/3Jz5q

XC50 Cray Met Data Test DQ-aoSpUQAAXby-.png (6.72 MB) https://mirrorace.org/m/3Jz5r

**

deadpool V2 3000.jpg (2.81 MB) https://mirrorace.org/m/5LrrU

Such wow art V2 3000 tGi0Ap74NwbRC.jpg (4.09 MB) https://mirrorace.org/m/4pwxi

Friday, March 12, 2021

Brain Bit Precision Int32 FP32, Int16 FP16, Int8 FP8, Int6 FP6, Int4? Idealness of Computational Machine Learning ML TOPS for the human brain

Brain Bit Precision Int32 FP32, Int16 FP16, Int8 FP8, Int6 FP6, Int4? Idealness of Computational Machine Learning ML TOPS for the human brain:

Brain level Int/Float inferencing is ideally in Int8/7 with error bits or float remainders

Comparison List : RS

48Bit Int+Float Int48+FP48 (many connections, Eyes for example) HDR Vision

40Bit Int+Float Int40+FP40 HDR Basic

Int16 FP32

Int8 Float16 (2 Channel, Brain Node) (3% Brain Study)

Int7 (20% Brain Study)

Int6 (80% Brain Study)

Int5 (Wolves (some are 6+))

Int4 (Sheep & worms)

Int3 (Germ biosystems)


Statistically, one science test stated that 80% of human brains quantize at 6 bits & 20% at 7 bits.

Xbox Series X & PlayStation 5 go down to Int4 (quite likely for quick inferencing).

Be aware that using 4Bit Int instructions potentially means more instructions used per clock cycle & more micro data transfers..

Int8 is most commonly able to quantize data with minimum error in 8Bit, like the Atari STE or the Nintendo 8Bit..

Colour perception, for example, is many orders of magnitude higher! Otherwise 8Bit EGA colours would be all we use..


16Bit was not good enough.. But 32Bit suits most people! And 10Bit(x4) 40Bit & Dolby 12Bit(x4) 48Bit is a luxury & we love it!
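The bit-depth comparisons above can be illustrated numerically. This is a minimal pure-Python sketch (the sine test signal, sample count and [-1, 1] mapping are illustrative assumptions, not from the text) showing how worst-case quantization error shrinks as the bit depth grows:

```python
# Sketch: quantize a signal to different integer bit depths and measure
# the worst-case rounding error -- illustrating why 8Bit quantization
# keeps errors small while 4Bit does not. Illustrative only; this is not
# a claim about actual neural precision.
import math

def quantize_error(bits: int, samples: int = 1000) -> float:
    """Max absolute error when a [-1, 1] signal is stored in `bits` bits."""
    levels = 2 ** bits
    worst = 0.0
    for i in range(samples):
        x = math.sin(2 * math.pi * i / samples)    # test signal in [-1, 1]
        q = round((x + 1.0) / 2.0 * (levels - 1))  # map to an integer code
        x_hat = q / (levels - 1) * 2.0 - 1.0       # decode back to a float
        worst = max(worst, abs(x - x_hat))
    return worst

for b in (4, 6, 7, 8, 16):
    print(f"Int{b}: max error ~ {quantize_error(b):.5f}")
```

Each extra bit roughly halves the worst-case error, which is why the jump from Int4 to Int8 matters so much more than the jump from Int8 to Int16.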


(c)Rupert S https://is.gd/ProcessorLasso


Restricted Boltzmann ML Networks : Brain Efficient

I propose that SIMD of large scale width & depth can implement the model :
Restricted Boltzmann Machines (RBMs) have been proposed for developing neural networks for a variety of unsupervised machine learning applications

Restricted Boltzmann Machines utilise a percentage correctness based upon the energy levels of multiple node values; these represent a percentage chance of a correct solution.

My impression is that annealer machines simply utilise more hidden values per node of a neural network,
Thus I propose that SIMD of large scale width & depth can implement the model..

A flexible approach is to experiment with percentages from a base value...
100 or 1000; We can therefore attempt to work with percentiles in order to adapt classical computation to the theory of multiplicity.

SiMD in parallel can, as we know from RISC architecture,
Attempt to run an ideal network composed of many-times Factor & regression learning models..

Once the rules are set, millions of independent IO OPS can be performed in cyclic learning,

Without sending or receiving data in a way that interferes with the main CPU & GPU function..

Localised DMA.
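To make the RBM proposal concrete, here is a toy pure-Python sketch of a Restricted Boltzmann Machine with one contrastive-divergence (CD-1) update. The layer sizes, learning rate and the CD-1 choice are my assumptions for illustration; the per-element loops stand in for the wide SIMD lanes the text proposes:

```python
# Minimal RBM sketch: node energies give a probability of activation,
# and the wide per-element updates are exactly the kind of work a large
# SIMD unit could batch. Sizes and learning rate are illustrative.
import math, random

random.seed(0)
N_VIS, N_HID, LR = 6, 4, 0.1
W = [[random.uniform(-0.1, 0.1) for _ in range(N_HID)] for _ in range(N_VIS)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v):
    """P(h_j = 1 | v) from the energy contribution of each hidden node."""
    probs = [sigmoid(sum(v[i] * W[i][j] for i in range(N_VIS)))
             for j in range(N_HID)]
    return probs, [1 if random.random() < p else 0 for p in probs]

def sample_visible(h):
    probs = [sigmoid(sum(h[j] * W[i][j] for j in range(N_HID)))
             for i in range(N_VIS)]
    return probs, [1 if random.random() < p else 0 for p in probs]

def cd1_step(v0):
    """One contrastive-divergence (CD-1) weight update for one sample."""
    p_h0, h0 = sample_hidden(v0)
    _, v1 = sample_visible(h0)
    p_h1, _ = sample_hidden(v1)
    for i in range(N_VIS):       # these independent per-weight updates
        for j in range(N_HID):   # are the part a SIMD unit would batch
            W[i][j] += LR * (v0[i] * p_h0[j] - v1[i] * p_h1[j])

cd1_step([1, 0, 1, 1, 0, 0])
```

Every weight update is independent of its neighbours, which is why the model maps naturally onto wide SIMD/AVX lanes with only localised memory traffic.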

"Adaptive hyperparameter updating for training restricted Boltzmann machines on quantum annealers"

Adaptive hyperparameter updating for training restricted Boltzmann machines on:
Quantum annealers
Wide Path SiMD




"Restricted Boltzmann Machines (RBMs) have been proposed for developing neural networks for a
variety of unsupervised machine learning applications such as image recognition, drug discovery,
and materials design. The Boltzmann probability distribution is used as a model to identify network
parameters by optimizing the likelihood of predicting an output given hidden states trained on
available data. Training such networks often requires sampling over a large probability space that
must be approximated during gradient based optimization. Quantum annealing has been proposed
as a means to search this space more efficiently which has been experimentally investigated on
D-Wave hardware. D-Wave implementation requires selection of an effective inverse temperature
or hyperparameter (β) within the Boltzmann distribution which can strongly influence optimization.
Here, we show how this parameter can be estimated as a hyperparameter applied to D-Wave
hardware during neural network training by maximizing the likelihood or minimizing the Shannon
entropy. We find both methods improve training RBMs based upon D-Wave hardware experimental
validation on an image recognition problem. Neural network image reconstruction errors are
evaluated using Bayesian uncertainty analysis which illustrate more than an order of magnitude
lower image reconstruction error using the maximum likelihood over manually optimizing the
hyperparameter. The maximum likelihood method is also shown to out-perform minimizing the
Shannon entropy for image reconstruction."

(c)Rupert S

Example ML Statistic Variable Conversion : Super Sampling Virtual Resolutions : Talking about machine learning & the hardware functions needed to run it within the SiMD & AVX feature-set.

For example this works well with fonts & web browsers & consoles or standard input display hubs or User Interfaces, UI & JS & Webpage code.

In the old days, photo applications did exist that used ML image enhancement on older processors..
So how did they exploit machine learning on hardware with MMX, for example?

Procedural process data analytics:

Converting large statistics databases, for general tessellation/interpolation of images..
The procedural element is writing the code that interpolates data based upon the statistics database...

Associated colours..
Face identity...
Linearity or curvature...
Association of grain & texture...

Databases get large fast & a 2MB to 15MB database makes the most sense...
Averages have to be categorized as either worthy of 2 places in the database or a single average..

You can still run ML on a database object & then the points in the table are called nodes!

Indeed you can do both, However database conversion makes datasets way more manageable to run within the SiMD & AVX feature-set.

However the matter of inferencing then has to be reduced to statistical averages & sometimes ML runs fine inferencing this way.

Both ways work, Whatever is best for you & the specific hardware.
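One hedged way to picture the statistics-database approach above: precomputed statistics live in a small sorted table, and "inferencing" reduces to interpolating between the two nearest entries. The table contents and the luminance-to-gain mapping here are invented for illustration:

```python
# Sketch of statistics-database inferencing: instead of running a full
# neural network, a small precomputed table is queried and linear
# interpolation between the two nearest entries does the work.
from bisect import bisect_left

# (input luminance) -> (statistically learned sharpening gain), sorted.
STATS_DB = [(0.0, 1.00), (0.25, 1.10), (0.5, 1.30), (0.75, 1.15), (1.0, 1.00)]

def lookup(x: float) -> float:
    """Linear interpolation between the two nearest database entries."""
    keys = [k for k, _ in STATS_DB]
    i = bisect_left(keys, x)
    if i == 0:
        return STATS_DB[0][1]
    if i == len(STATS_DB):
        return STATS_DB[-1][1]
    (k0, v0), (k1, v1) = STATS_DB[i - 1], STATS_DB[i]
    t = (x - k0) / (k1 - k0)
    return v0 + t * (v1 - v0)

print(lookup(0.375))  # halfway between the 0.25 and 0.5 entries
```

The whole "model" is a few kilobytes of table plus a handful of arithmetic ops, which is why this style fits SiMD & AVX so easily compared with full node-by-node inferencing.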

(c)Rupert S

**

DL-ML slide : Machine Learning DL-ML


By my logic the implementation of a CPU+GPU model would be fluid to both..

Machine Learning : Scientific details relevant to the DL-ML slide (CPU, GPU, SiMD hash table (M1 Vector Matrix-table + speed))

The vector logic is compatible with both: CPU+GPU+SiMD+AVX.

Relevant because we use Vector Matrix Table hardware.. and in notes the Matrix significantly speeds up the process.
(Quantum Light Matrix)

The relevance to us is immense with worldwide VM servers:
A DL-ML machine learning model compatible with our hardware.

However this is a model we can use & train..
For common core : Rupert S https://is.gd/ProcessorLasso


Saturday, February 13, 2021

Multi Operation Maths - CPU,GPU Computation

Multi Operation Maths - CPU,GPU Computation (c)RS

Performing multiple 4, 8, 16 & 32Bit operations on a 64Bit integer core (the example)



Kind of an F16 operation, & Integer 16 or Int8 if you need it; with careful management and special libraries..
Capable of speeding up PC, Mac & Consoles :HPC:
Requires specially compiled libraries so compiled code can be managed & roll ops assessed.

Rules:

All operations need to be by the same multiplication

Rolls usable to convert value for example Mul & Division


For example :


451 722 551 834 x 6


In the case of non-base-factor roll numbers,

We have to fraction the difference between the value and our base roll number,

10 for example and 6; So the maths is convoluted & may not be worth it,

We could do 6 + rolls & then -rolls.

On a base-10 processor the first factor would be 10x, because we could compensate by placement,

But we still need space to expand the result to the right or left:

0451072205510834 x 10 =

4510722055108340

or 4510 roll -12
7220 roll -8
5510 roll -4
8340 no roll

Converting base 10 to & from hex may make sense

Depending on the cost of roll; This operation may be worth it!

This operation is in Base 10 & 8Bit makes more sense mostly for common operations in hex..

But 8 is not a very big number for larger maths & 16Bit makes more sense; Because it holds a larger range.
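The same-multiplier rule above can be sketched with SWAR-style packing (SIMD-within-a-register): several narrow values share one wide integer, and a single multiply updates every lane at once. The 16-bit lane width and the example values (reusing 451, 722, 551, 834 from above) are illustrative:

```python
# SWAR sketch: four 16-bit values share one 64-bit integer, so one
# multiply applies the same factor to every lane. This is only valid
# while no lane overflows 16 bits -- which is exactly why the rules
# above require every lane to use the same small multiplier.

def pack4(a, b, c, d):
    """Pack four 16-bit values into one 64-bit word."""
    return a | (b << 16) | (c << 32) | (d << 48)

def unpack4(x):
    """Split a 64-bit word back into its four 16-bit lanes."""
    return [(x >> s) & 0xFFFF for s in (0, 16, 32, 48)]

x = pack4(451, 722, 551, 834)
y = (x * 6) & 0xFFFFFFFFFFFFFFFF  # one multiply, four results
print(unpack4(y))  # -> [2706, 4332, 3306, 5004]
```

Because every product stays below 2^16, no carry crosses a lane boundary; a larger multiplier or larger inputs would corrupt the neighbouring lane, which is the cost the text warns about.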

Performing numeric expansion:

Consoles in particular, and the FPU, where expansion is required for emergent mathematics.

Performing numeric expansion for circumstances where we require larger numbers for example:

To fill the 187 FPU buffer..

To do that we will roll to the left & expand the number, although we may need multiple operations..

Like i say : Roll + or Roll -

1447000
-Roll 3 = 1447
or
+Roll 3 = 1447000000

That way we can potentially exceed the Bit Depth 32Bit for example.
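The roll idea above can be sketched as a mantissa-plus-shift representation: keep a small value and a decimal shift count, so numbers larger than the register width can still be carried. The function names are illustrative:

```python
# Sketch of the "roll" representation: 1447000 becomes (1447, roll 3),
# and rolling further up reconstructs values that exceed the original
# narrow width -- matching the +Roll / -Roll example in the text.

def roll_down(value: int):
    """Strip trailing decimal zeros: 1447000 -> (1447, 3)."""
    shift = 0
    while value != 0 and value % 10 == 0:
        value //= 10
        shift += 1
    return value, shift

def roll_up(mantissa: int, shift: int) -> int:
    """Re-expand a rolled value: (1447, 6) -> 1447000000."""
    return mantissa * 10 ** shift

m, s = roll_down(1447000)
print(m, s)               # -> 1447 3
print(roll_up(m, s + 3))  # -> 1447000000
```

The pair (1447, 6) fits comfortably in narrow storage while representing 1447000000, which is the sense in which rolling lets arithmetic exceed a fixed bit depth.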

Rupert S https://science.n-helix.com


*****

Packed F16C & F16 Values in use on CPU & GPU - RS

F16C & F16 : lower precision values that are usable to optimise GPU & CPU operations that involve less detailed values, like hashes, game metadata or machine learning : RS

Firstly, F16C is an instruction set supported by the FX 8320E, so the CPU can potentially use packed F16 float instructions directly,
As quoted, F16 carefully managed produces a pipeline that is 100% F16..

Packed F16 instructions use 2 data sets per 32Bit storage register...
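The two-values-per-register idea can be sketched directly: Python's `struct` module supports the IEEE half-precision `'e'` format, so two F16 values can be packed into one 32-bit word and recovered. The helper names are illustrative:

```python
# Sketch: pack two F16 (half-precision) values into one 32-bit word,
# as described above, using the IEEE 754 binary16 'e' struct format.
import struct

def pack_f16_pair(a: float, b: float) -> int:
    """Two half floats -> one 32-bit word (a in the low 16 bits)."""
    lo, = struct.unpack('<H', struct.pack('<e', a))
    hi, = struct.unpack('<H', struct.pack('<e', b))
    return lo | (hi << 16)

def unpack_f16_pair(word: int):
    """One 32-bit word -> the two half floats it carries."""
    a, = struct.unpack('<e', struct.pack('<H', word & 0xFFFF))
    b, = struct.unpack('<e', struct.pack('<H', (word >> 16) & 0xFFFF))
    return a, b

w = pack_f16_pair(1.5, -2.25)
print(unpack_f16_pair(w))  # -> (1.5, -2.25)
```

Both test values are exactly representable in F16; values that are not get rounded on the way in, which is the precision trade the rest of this section manages.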

Data is converted if the array of instructions includes F32 & commonly all F16 should be present first; Before group conversion or alternatively...

Allocating an additional 16Bits of data for example 0000x4 or subtle variance data that allows unique renders... Such as a chaos key or Entropy / RNG Random data...

Potentially allocating a static key in the form of AES Output from base pair F16c Value...

The additional data potentially makes each game player's render unique!

Fast Conversion Proposals include:

Unique per player additional data (AES Key conversion for example, Or DES; DES Produces smaller faster values)

Static key, Sorted by data type (Base on player profile or Game map)

Dynamic Key

0000 or empty buffer hash

Side by Side : Wide format texture = 2xF16 Value on same 32Bit Value
Top & Bottom : F16 Double layered format texture = 2xF16 Value on same 32Bit Value

Yes, transparency for alien skin can use the Top & Bottom F16 layered texture;
Machines also; Or even 4 layers for a truly special effect.

Combine both methodology and crypto hash with one or more layer of BumpMap RayTracing SiMD

SiMD is also 16Bit compatible so no conversion required.

Weather & clouds are examples perfect for light fast loads over massive GPU Arrays.

F16 values are also theoretically ideal for 16Bit audio if using SiMD..

In the case of AVX probably worth using dynamic key conversion..
A Dynamic Remainder key that allows lower bits to interpolate Sound data.

Other object sources such as render can potentially use the F16 system to..
Interpolate or Tessellate bits on shift from F16 to F32 on final plane write to frame buffer..
The memory required would be the buffer & not the source process..

An example is to replace the bits missing from F16 in F32/F64 with tessellation shaping and sharpening code; Dynamically relative to performance of the GPU/CPU...
F16 values obviously transfer from GPU to CPU fast & CPU to GPU..

Image enhancement is also possible with a bitshift stack buffer that passes additional data to the missing bits..
For example pre-processed micro BumpMapping or a Compute shading process that will pull the bits in.. Under the F16 data: 453000.172000 > 453545.172711 bit swap.. could be complex!
Done with a cache? Entirely possible with united L3 cache.

DLSS & Dynamic sharpen & Smooth/Filter enhanced virtual resolution .. Can significantly enhance the process..
Of dynamic buffer pipelining to render path. (on requirement benefit)

(c)Rupert S https://science.n-helix.com/2019/06/vulkan-stack.html

https://gpuopen.com/learn/first-steps-implementing-fp16/

Sunday, November 8, 2020

Life Of a "Titan"(Planet)

 Life Of a "Titan"(Planet) C3H2 & C6H6 both react quite well with oxygen (O) & chlorine (Cl)


"NOVEMBER 6, 2020 BY MATT WILLIAMS: Titan's Atmosphere Has All the Ingredients For Life. But Not Life as We Know It
Using the Atacama Large Millimeter/submillimeter Array (ALMA), a team of scientists has identified a mysterious molecule in Titan's atmosphere. It's called cyclopropenylidene (C3H2), a simple carbon-based compound that has never been seen in an atmosphere before. According to the team's study published in The Astronomical Journal, this molecule could be a precursor to more complex compounds that could indicate possible life on Titan."

C3H2 & C6H6 both react quite well with oxygen (O) & chlorine (Cl).
My temporary conclusion is that Titan mostly does not have a free radical system in the stratosphere,

Complex carbons in the higher atmosphere suffer from a complex mix of:

Electrical ionic interference
Tough atmosphere ( Electrically & content of Metallic/Organic/Earth Catalysts dynamic to layers of the sky )
Mass, Weight, Density & excitation

Separation of Carbon;Hydrogen bonds 
Unification of Carbon;Hydrogen bonds 

Ground level complexity may well lead one to conclude a potential for basic bacteria & early-Earth fractal life (types of plant/algae)..

However complex Oxygen breathers? depends on environment (Shielded for example & volcanic;Elements;gases;water Sulphur/Metals & Mineral deposits)

Ionisation would be surprisingly frequent with dry-atmosphere static charges & planetary dust,
However with high concentrations of carbon, Titan is surprisingly suggestive of cosmic dust contamination.

Alternative names for dust contamination are :
Life-generating solar mass explosions (Nova)
Inter-system galactic cloud interception
Interplanetary pollination & asteroid inception

The ionosphere would mostly contain:

Carbon dioxide
Hydrogen oxide H2O & a bit lower HO2 & HO
(ionisation from electric interference from local planets & higher frequency light & gamma radiation)

Helium (due to lower solar wind it may be captured there)

Rupert S https://science.n-helix.com
https://www.universetoday.com/148722/titans-atmosphere-has-all-the-ingredients-for-life-but-not-life-as-we-know-it/

https://mediarelations.uwo.ca/2020/10/29/titan-dragonfly-life/

https://www.nasa.gov/feature/goddard/2020/nasa-scientists-discover-a-weird-molecule-in-titan-s-atmosphere/

https://www.almaobservatory.org/en/home/

https://wcg.n-helix.com  

https://science.n-helix.com/2020/04/cern.html - Masks & Filters

https://science.n-helix.com/2020/01/coronavirus.html

Additional data for further improvements:


Machine learning & Server improvement: Files attached please utilise.



Saturday, June 13, 2020

CryptoSEED Bug 2020(tm) - Patches chip flaw that could leak your cryptographic secrets : RND

CryptoSEED Bug 2020(tm) - Patches chip flaw that could leak your cryptographic secrets RND Security 2020-06-12

Core RNDSEED Security MEGA BOMB Hits all SSL Certificates #SecurityNEWS #RS

Patches chip flaw that could leak your cryptographic secrets RND Security possibly compromised


"Notably, memory addresses that have been accessed recently typically
end up cached inside the chip, to speed up access in case they’re
needed again soon, because that improves performance a lot. Therefore
the speed with which memory locations can be accessed generally gives
away information about how recently they were peeked at – and thus
what memory address values were used – even if that “peeking” was
speculative and was retrospectively cancelled internally for security
reasons."

The flaws are due to caching:

According to the theory of the cure? Clear the cache. As per my Meltdown
fix, XOR is a potential solution,
However crypto keys frequently seed from cache data.. Leaving a
massive security flaw in the entropy pool.
So what do we do?
Potentially we AES a pool of data hashes.. Hashing a key with a re-sort?
In short, like this: X = cache buffer, for a length 16 to 64KB fragment,
& sort by Factor Xn

Factor Xn is RAM Fullness = 32Bit Integer * %CPUUsage as a fraction of 100

The hash is then plausibly meaningless but viable for RND random use

Factor Xn is then fed into a roll loop to factor the cryptographic key RND

Meaning that the cache clear fix can potentially reveal MASSIVE
Random/Entropic pools
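A hedged sketch of the scheme above: hash a buffer fragment together with the Factor Xn mixing value, so the digest is plausibly meaningless to an observer but still usable as seed material. The Factor Xn formula follows the text; the buffer input is a stand-in (reading real cache contents is not possible from user space), and SHA-256 is my substitution for the AES-based mixing the text mentions:

```python
# Entropy-pool sketch: derive seed material by hashing a buffer fragment
# keyed with Factor Xn = RAM fullness (32-bit int) * CPU usage / 100.
# SHA-256 stands in for the AES mixing step (assumption); os.urandom
# stands in for a 16KB cache fragment (assumption).
import hashlib, os

def factor_xn(ram_fullness_32: int, cpu_usage_pct: float) -> int:
    """Factor Xn per the text: 32-bit fullness * CPU usage as a fraction of 100."""
    return int(ram_fullness_32 * (cpu_usage_pct / 100.0))

def seed_from_buffer(fragment: bytes, xn: int) -> bytes:
    """Hash the fragment keyed by Factor Xn into 32 bytes of seed material."""
    h = hashlib.sha256()
    h.update(xn.to_bytes(8, 'little'))
    h.update(fragment)
    return h.digest()

fragment = os.urandom(16 * 1024)  # stand-in for a 16KB cache fragment
seed = seed_from_buffer(fragment, factor_xn(0x8A2E91C4, 37.5))
print(len(seed))  # -> 32
```

The digest reveals nothing useful about the fragment on its own, yet changes completely with either the fragment or the load-derived factor, which is the property the proposal relies on.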

(Free for Linux kernels & RND seed generators based upon Ubuntu code; the Ubuntu code can be supported by donations of development, money or service, & specifically does not have to be GPL, but Ubuntu is charitable : RS)

(c) Rupert Summerskill




Intel patches chip flaw that could leak your cryptographic secrets
12 JUN 2020


"by Paul Ducklin
This week, Intel patched a CPU security bug that hasn’t attracted a
funky name, even though the bug itself is admittedly pretty funky.

Known as CVE-2020-0543 for short, or Special Register Buffer Data
Sampling in its full title, it serves as one more reminder that as we
expect processor makers to produce ever-faster chips that can churn
through ever more code and data in ever less time…

…we sometimes pay a cybersecurity price, at least in theoretical terms.

If you’re a regular Naked Security reader, you’re probably familiar
with the term speculative execution, which refers to the fact that
modern CPUs often race ahead of themselves by performing internal
calculations, or partial calculations, that might nevertheless turn
out to be redundant.

The idea isn’t as weird as it sounds because modern chips typically
break down operations, that look to the programmer like one machine
code instruction, into numerous subinstructions, and they can work on
many of these so-called microarchitectural operations on multiple CPU
cores at the same time.

If, for example, your program is reading through an array of data to
perform a complex calculation based on all the values in it, the
processor needs to make sure that you don’t read past the end of your
memory buffer, because that could allow someone else’s private data to
leak into your computation.

In theory, the CPU should freeze your program every time you peek at
the next byte in the array, perform a security check that you are
authorised to see it, and only then allow your program to proceed.

But every time there’s a delay in finishing the security check, all
the microarchitectural calculation units that your program would
otherwise have been using to keep the computation flying along would
be sitting idle – even though the outcome of their calculations would
not be visible outside the chip core.

Speculative execution says, amongst other things, “Let’s allow
internal calculations to carry on ahead of the security checks, on the
grounds that if the checks ultimately pass, we’re ahead in the race
and can release the final output quickly.”

The theory is that if the checks fail, the chip can just discard the
internal data that it now knows is tainted by insecurity, so there’s a
possible performance boost without a security risk given that the
security checks will ultimately prevent secret data being disclosed
anyway.

The vast majority of code that churns through arrays doesn’t read off
the end of its allotted memory, so the typical performance boost is
huge, and there doesn’t seem to be a downside.

Except for the inconvenient fact that the tainted data sometimes
leaves behind ghostly echoes of its presence that are detectable
outside the chip, even though the data itself was never officially
emitted as the output of a machine code instruction.

Notably, memory addresses that have been accessed recently typically
end up cached inside the chip, to speed up access in case they’re
needed again soon, because that improves performance a lot. Therefore
the speed with which memory locations can be accessed generally gives
away information about how recently they were peeked at – and thus
what memory address values were used – even if that “peeking” was
speculative and was retrospectively cancelled internally for security
reasons.

Discernible traces
Unfortunately, any security shortcuts taken inside the core of the
chip may inadvertently leave discernible traces that could allow
untrusted software to make later inferences about some of that data.

Even if all an attacker can do is guess, say, that the first and last
bits of your secret decryption key must be zero, or that the very last
cell in your spreadsheet has a value that is larger than 32,767 but
smaller than 1,048,576, there’s still a serious security risk there.

That risk is often compounded in cases like this because attackers may
be able to refine those guesses by making millions or billions of
inferences and greatly improving their reckoning over time.

Imagine, for instance, that your decryption key is rotated leftwards
by one bit every so often, and that the attacker gets to “re-infer”
the value of its first and last bits every time that rotation happens.

Given enough time and a sufficiently accurate series of inferences,
the attackers may gradually figure out more and more about your secret
key until they are well-placed enough to guess it successfully.

(If you recover 16 bits of a decryption key that was supposed to
withstand 10 years of concerted cracking, you can probably break it
2^16 or 65,536 times faster than before, which means you now only need
a few hours.)

What about CVE-2020-0543
In the case of the Special Register Buffer Data Sampling bug, or
CVE-2020-0543, the internal data that might accidentally leak out –
or, more precisely, be coaxed out – of the processor chip includes
recent output values from the following machine code instructions:

RDRAND. This instruction code is short for ReaD secure hardware RANDom
number. Ironically, RDRAND was designed to produce top-quality
hardware random numbers, based on the physics of electronic thermal
noise, which is generally regarded as impossible to model
realistically. This makes it a more trusted source of random data than
software-derived sources such as keystroke and mouse timing (which
doesn’t exist on servers), network latency (which depends on software
that itself follows pre-programmed patterns), and so on. If another
program running on the same CPU as yours can figure out or guess some
of the random numbers you’ve knitted into your recent cryptographic
calculations, they might get a handy head start at cracking your keys.
RDSEED. This is short for ReaD random number SEED, an instruction that
operates more slowly and relies on more thermal noise than RDRAND.
It’s designed for cases where you want to use a software random number
generator but would like to initialise it with what’s known as a
“seed” to kickstart its randomness or entropy. An attacker who knows
your software random generator seed could reconstruct the entire
sequence, which might enable or at least greatly assist future
cryptographic cracking.
EGETKEY. This stands for Enclave GET encryption KEY. Enclave means
it’s part of Intel’s much vaunted SGX set of instructions which are
supposed to provide a sealed-off block of memory that even the
operating system kernel can’t look inside. This means an SGX enclave
acts as a sort of tamper-proof security module like the specialised
chips used in smart cards or mobile phones for storing lock codes and
other secrets. In theory, only software that is already running in the
enclave can read data stored inside it, and can’t write that data
outside it, so that encryption keys generated inside the enclave can’t
escape – neither by accident nor by design. An attacker who could make
inferences about random cryptographic keys inside an enclave of yours
could end up with access to secret data that even you aren’t supposed
to be able to read out!
How bad is this?
The good news is that guessing someone else’s most recent RDRAND
values doesn’t automatically and instantly give you the power to
decrypt all their files and network traffic.

The bad news, as Intel itself admits:

RDRAND and RDSEED may be used in methods that rely on the data
returned being kept secret from potentially malicious actors on other
physical cores. For example, random numbers from RDRAND or RDSEED may
be used as the basis for a session encryption key. If these values are
leaked, an adversary potentially may be able to derive the encryption
key.

And researchers at the Vrije University Amsterdam and ETH Zurich
have published a paper called CROSSTALK: Speculative data leaks across
cores are real (they did come up with a funky name!) which explains
how the CVE-2020-0543 flaw could be exploited, concluding that:

The cryptographically-secure RDRAND and RDSEED instructions turn out
to leak their output to attackers […] on many Intel CPUs, and we have
demonstrated that this is a realistic attack. We have also seen that
[…] it is almost trivial to apply these attacks to break code running
in Intel’s secure SGX enclaves.

What to do?
Intel has released a series of microcode updates for affected chips
that dial back speed in favour of security to mitigate these
“CROSSTALK” attacks.

Simply put, secret data generated inside the chip as part of the
random generator circuitry will be aggressively purged after use so it
doesn’t leave behind those “ghostly echoes” that might be detected
thanks to speculative execution.

Also, access to the random data generated for RDRAND and RDSEED (and
consumed by EGETKEY) will be more strictly regulated so that the
random numbers generated for multiple programs running in parallel
will only be made available in the order that those programs made
their requests.

That may reduce performance slightly – every program wanting RDRAND
numbers will have to wait its turn instead of going in parallel – but
ensures that the internal “secret data” used to generate process X’s
random numbers will have been purged from the chip before process X+1
gets a look in.

Where to get your microcode updates depends on your computer and your
operating system.

Linux distros will typically bundle and distribute the fixes as part
of a kernel update (mine turned up yesterday, for example); for other
operating systems you may need to download a BIOS update from the
vendor of your computer or its motherboard – so please consult your
computer maker for advice.

(Intel says that, “in general, Intel Core-family […] and Intel Xeon E3
processors […] may be affected”, and has published a list of at-risk
processor chips if you happen to know which chip is in your computer.)"

Friday, April 24, 2020

HPC Render Layer priorities - Science;Research:Cinematic:Movie:Design & Gaming

HPC - Render Layer priorities - Science;Research:Cinematic:Movie:Design & Gaming

A List sorted for functional use in Programming:Science:Gaming:RS

https://science.n-helix.com/2020/04/render.html

The truth is the image & 8/7/6/5/4/3D Holograph/Sound/Wave/Field/Polygon standards:

(Standard Physics Model) for processing data (scans,XRay's,Ultra sound,Sonar: for example)
Are basic standards Implemented by the GPU Processor format support,
Vulkan,OpenGL:ES,DirectX,Mainframe,PC,Mac,Phone & Console

Logic necessitates maths: 
Float & Integer : Dynamic,Precision,High precision,Compressed,Half & Lossless compression optimised bit range & masking(for example XOR & AVX,Vector)

Feature sets of the Standard CL_C++ divide,

Simple instruction parallel SiMD,AVX,Vector
from Float & Integer FPU full instruction set

However the supported standard algebra objects : A>Z : Sign standards like Planck's,
Are to be defined first.

Function call priority:

1: Memory defined maths objects : MDMO : Float,Integer & static

2: Functions by class : SiMD(Vector), Float instructions FPU & Integer instructions

3: Having defined these, We call on Vulkan,OpenGL,Metal,DirectX for supported object classes:
Images,Audio,Polygon's & other objects defined by GPU standards (compression standards for example).

Minimising calls : Parallel Identity instructions: OpenCL,Direct Compute & at the same time,
Call OpenGL,Vulkan,Metal,DirectX,Console code,

Initiating the GPU standards called : OpenGL, Vulkan, Metal, DirectX, Console code..
To acquire the format standards supported, does not mean that we have to use the standard in CL_C++/C#;
Acquiring the supported definitions simply means : Usable & Load/Save available.

Order as defined below: Display the HPC Science requirements for Research,Render,CAD,Cinema,Movies,Gaming

Dual execution, Single or multiple source RAM objects; Initiated though resource allocation & management.

AMD,IBM,NVidia,Intel,Sony,Microsoft,Linux,Apple: Follow the model: Dynamic Managed execution & timing assessment with pre-fetch anticipatory cache: L1,L2,L3

(c)Rupert S

>>

A List sorted for functional use in Programming:Science:Gaming:RS

https://science.n-helix.com/2019/06/kernel.html } Bios : compute : HPC
https://science.n-helix.com/2018/09/hpc-pack-install-guide.html } Without HPC software & this stack nothing works BIG

Over arch API standard SDK core code:
https://www.khronos.org/openkode/

User Interaction:
https://www.khronos.org/streaminput

Display & windows:
https://www.khronos.org/openwf

Media protocols & data collection + camera & video:
Audio, Video & Media encoding standards and hardware, Data collection, Process & Save
https://www.khronos.org/openkcam
https://www.khronos.org/openmaxdl

https://www.khronos.org/sycl   }
https://www.khronos.org/opencl } 3x load:code research data exploitation:Render & save+optimise+compress

https://www.khronos.org/collada/ } Dynamic data sets of precise 4/3D assets for study.
https://www.khronos.org/nnef }
https://www.khronos.org/gltf } : 2x for input & output render & data

3D data & photo input standards & compression, Data Sets

https://www.khronos.org/anari

Despite the priority of high accuracy, particular to research & CAD conception,

The priority of introduction to cinematic render makes gaming a priority,

Particularly in light of RayTrace & WebRender : WebGL, OpenGL:ES & virtual systems & VMs

https://www.khronos.org/spir - Priority pixel & Vector & Ray-trace 8

https://www.khronos.org/opencl } 3x load:code research data exploitation:Render & save+optimise+compress

https://www.khronos.org/openxr } Extrapolation of rendered data in ML,AI & Analytics

High priority data exploration & utilisation & save:

Database        }
https://www.khronos.org/collada/ } Dynamic data sets of precise 4/3D assets for study.
https://www.khronos.org/gltf } : 2x for input & output render & data

3D data & photo input standards & compression, Data Sets

https://www.khronos.org/opencl } 3x load:code research data exploitation:Render & save+optimise+compress

ANARI

Analytic Rendering Interface for Data Visualisation

Launched in November 2019, the Khronos Analytic Exploratory Group is now ANARI™, an official Working Group under Khronos governance. This new Analytic Rendering Interface API will streamline data visualisation development for any company creating scientific visualisation rendering engines, libraries and applications. ANARI will free visualisation domain experts and software developers from non-trivial rendering details while enabling graphics experts to avoid domain-specific functionality and optimisations in their rendering backends.

OpenMAX AL (Application Layer)

OpenMAX AL provides a standardized interface between an application and multimedia middleware, where multimedia middleware provides the services needed to perform expected API functionality. OpenMAX AL provides application portability with regards to the multimedia interface.

OpenMAX IL (Integration Layer)

OpenMAX IL serves as a low-level interface for audio, video, and imaging codecs used in embedded and/or mobile devices. It gives applications and media frameworks the ability to interface with multimedia codecs and supporting components (i.e., sources and sinks) in a unified manner. The codecs themselves may be any combination of hardware or software and are completely transparent to the user. 
Without a standardized interface of this nature, codec vendors must write to proprietary or closed interfaces to integrate into mobile devices. The principal goal of the IL is to give codecs a degree of system abstraction using a specialized arsenal of features, honed to combat the problem of portability among many vastly different media systems.

OpenMAX DL (Development Layer)

OpenMAX DL defines an API which contains a comprehensive set of audio, video and imaging functions that can be implemented and optimized on new processors by silicon vendors and then used by codec vendors to code a wide range of codec functionality. It includes audio signal processing functions such as FFTs and filters, imaging processing primitives such as color space conversion and video processing primitives to enable the optimized implementation of codecs such as MPEG-4, H.264, MP3, AAC and JPEG. OpenMAX supports acceleration concurrency via both iDL, which uses OpenMAX IL constructs, and aDL which adds asynchronous interfaces to the OpenMAX DL API.

glTF (Image format for 3D objects & subject work)

WebGL has faster load times with glTF’s GLB format versus JSON:

The glTF GLB format saves & loads vertex vectors at up to 4x the speed. The model is included in the Vulkan render layers initiative & enables compute on the 3D/4D/<>8D models that we need for gaming & research,
Quantum code can also use the model, as it saves considerable time making lattice models..
Suggested standard practice for saving space & clock cycles in high-performance gaming.

Tessellation will be faster & vector vertex processing faster, with a lighter memory footprint.


Rupert S https://science.n-helix.com

Sunday, April 12, 2020

CERN-Filter & Masks

Centred Energy Reactive carboN : CERN-Filter , Masks & Clothing Material

Patents : (c)RS

(Save the health and shop community and health workers in NGO such as https://www.msf.fr)
(Donations appreciated!)

(The best formula we believe in at CERN & Boinc & WCG)

Packs of 10 to 100 masks in this design, free distribution to all customers with a subscription of 1 year & or a product of prestige from your play store..

(By recognition your sympathies and donations of contribution to the science community are most appreciated.)

Masks & Filters, Industrial Filters & protective materials : RS

Created in layers of fine mesh filter carbon between layers of water/energy absorbent/retentive/expulsive silicon gel fibre cloth,

At 1/3mm to 4mm thick, the face masks & machine filters are intended to reduce and balance the atmosphere in situations where: water, chemicals, solvents, dust & germs/bio-matter are present & a cause of problems,

The solution is created by the LHC team

(c)RS

At 1/2mm to 20mm, C.E.R.N Reactive Material is designed to shield bodies from many threats,
A layer of refraction through light materials on top & polarising light fields & OLED mesh..
The creation of Synergy System Dynamic Logic.

O.L.E.D Formula for display technology : (c)RS

O.L.E.D Display technology : Advanced Hardened HDR LED:
Layer of bouncy silicone fibre vine mixed with very fine carbon fibres,
Layer of hardened transparent material, GEM
Layer of detective material (very fine) & O.L.E.D
Layers of bouncy silicon plastic (elastic) and very fine carbon fibre..
piercing conductive surface circuit..

Cooling layer Metallic radiator & electrostatic-VRBase 4 cone speaker (very thin)
Layer of cooling material for strengthening display.
Components.
Layer cooling & protective.

(c) Rupert S

https://lhcathome.cern.ch

https://home.cern/

https://boinc.n-helix.com/dl/boinc_7.16.5_windows_x86_64.exe

https://boinc.n-helix.com/dl/boinc_7.16.6_macOSX_x86_64.zip

****

Medicine : Ventilator : CERN are developing the revolutionary HEV : High Energy Ventilator(tm)


You might want to ventilate asthma & burn patients as well as other patients with ventilation issues, such as astronauts & divers,
Serious topics such as ventilation & filtering in nuclear radiation zones & biological hazards.
These features are proposed to be included & researched.

https://is.gd/ProcessorLasso

A team of experts at the European Organisation for Nuclear Research (CERN) in Switzerland, the operator of the largest particle physics lab in the world, is developing a stripped-down medical ventilator for patients suffering with COVID-19.

Known as the High Energy Ventilator, or HEV, the device could be used to help treat patients with mild forms of the disease, or those who are in the recovery phase, freeing up more advanced machines for more severe cases. The design was proposed by a team of physicists from the LHCb (Large Hadron Collider beauty) experiment and has been designed with "ease of deployment in mind."

The device is based on components that are simple and inexpensive to source, and can be powered with batteries, solar panels, or emergency generators, making it easy to deploy in areas with limited resources.

https://www.newsweek.com/cern-stripped-down-ventilator-covid-19-patients-batteries-weeks-1497025

https://science.n-helix.com/2020/01/coronavirus.html

http://science.n-helix.com/2015/07/sacrifice-and-nobility.html

http://science.n-helix.com/2018/09/hpc-pack-install-guide.html

http://science.n-helix.com/2017/04/boinc.html

http://science.n-helix.com/2020/01/float-hlsl-spir-v-compiler-role.html

http://science.n-helix.com/2019/06/vulkan-stack.html