|
|
Oooh my hero. I'd have figured it out myself except transistors are still half wizardry to me.
I'm not great at this.
Real programmers use butterflies
|
|
|
|
|
One is glad to be of service
Mircea
|
|
|
|
|
Just looked again at the schematic. Put a 10k resistor in series on the input (the signal marked SET/RESET), otherwise you risk frying the transistor.
Mircea
|
|
|
|
|
Pffft, transistors, real witches use relays.
|
|
|
|
|
|
Real witches use FETs - Familiar Effect Transistors.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
It's like plumbing; no matter what you have it's not the right thing!
|
|
|
|
|
|
Neat! I learned to code on a 65C816 running usually in 6502 compat mode.
I learned assembly on it when I was like 9, so I'll take a look. =)
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: What are the odds?
That depends.
If you just bought billions of non-usable PNP transistors off Amazon, or billions of usable PNP transistors off Alibaba, I'd say the odds are about 100%
|
|
|
|
|
OK - another solution, even if a bit late for the party.
Since you were clearly online as you typed your post, I'd suggest just borrowing one from your system's CPU. I mean, I'm sure there are at least 1.73 scillions of them in there, and it's unlikely to miss a few.
Just put it back when you're done with it.
Ravings en masse
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
Has anyone seen this?
Neutralinojs vs Electron vs Nw.js - see the results[^]
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
Isn't it a tiny uncharged part of an Italian?
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
OriginalGriff wrote: uncharged part of an Italian
It's entirely theoretical; Italians are fully-charged at all times.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
I know less about that stuff than the back end of a donkey, but the numbers look impressive. I'm shocked that anyone doing this stuff cares about performance. Sometimes I think the worst thing that happened to our industry was the availability of MIPS, GBs, and TBs out the yin-yang.
|
|
|
|
|
I started programming when performance was highly valued, and resources were scarce. I still try to code as if both of those things were still in play.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
"I didn't mention the bats - he'd see them soon enough" - Hunter S Thompson - RIP
|
|
|
|
|
Quote: I still try to code as if both of those things were still in play.
I believe they are more important than ever.
Strangely enough, some (many? most?) cannot see the correlation between poorly performing software and the cost of deploying said software to the cloud.
In my guesstimate, 99% of the software that gets pushed at unsuspecting customers consumes more than 100x (not %) the resources it should in an even remotely sane world.
Here is a silly Python snippet; its only purpose is to burn CPU time:
from datetime import datetime

def AddUp(x):
    if x > 0:
        return AddUp(x - 1) + x
    else:
        return 0

def CallAddUp():
    result = 0
    for x in range(1000000):
        result = result + AddUp(512)
    return result

started = datetime.now()
result = CallAddUp()
finished = datetime.now()
duration = finished - started
seconds = duration.seconds + duration.microseconds / 1E6
print("CallAddUp() returned ", result, " in ", seconds, " seconds")
Run the above, and you get:
CallAddUp() returned 131328000000 in 77.055948 seconds
Doing the same thing in C# (note: AddUp(512), so it computes the same result as the Python and C++ versions):

using System;
using System.Diagnostics;

namespace ScriptExample001a
{
    class Program
    {
        static long AddUp(long x)
        {
            if (x > 0)
            {
                return x + AddUp(x - 1);
            }
            else
            {
                return 0;
            }
        }

        static long CallAddUp()
        {
            long result = 0;
            for (long i = 0; i < 1000000; ++i)
            {
                result += AddUp(512);
            }
            return result;
        }

        static void Main(string[] args)
        {
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            var resultValue = CallAddUp();
            stopwatch.Stop();
            var duration = stopwatch.Elapsed.TotalSeconds;
            Console.Out.WriteLine("C# CallAddUp( ) returned {0} in {1} seconds", resultValue, duration);
        }
    }
}
Run the above, and you get
C# CallAddUp( ) returned 131328000000 in 1,2486408 seconds
And then in C++:
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <ctime>

uint64_t AddUp( uint64_t x )
{
    if ( x > 0 )
    {
        return AddUp( x - 1 ) + x;
    }
    else
    {
        return 0;
    }
}

uint64_t CallAddUp( )
{
    uint64_t result = 0;
    for ( size_t i = 0; i < 1000000; ++i )
    {
        result += AddUp( 512 );
    }
    return result;
}

int main( )
{
    std::clock_t start = std::clock( );
    uint64_t resultValue = CallAddUp( );
    double duration = ( std::clock( ) - start ) / (double)CLOCKS_PER_SEC;
    printf( "C++ CallAddUp( ) returned %llu in %f seconds\n", (unsigned long long)resultValue, duration );
}
Run the above, and you get:
C++ CallAddUp( ) returned 131328000000 in 0.000000 seconds
Hence, many (most?) computer scientists have embraced Python as their language of choice for crunching numbers …
Never mind that they often store data in text files, and then push them around between a plethora of web-services, …, etc.
Espen Harlinn
Senior Architect - Ulriken Consulting AS
The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. - Edsger W. Dijkstra
|
|
|
|
|
Espen Harlinn wrote: C++ CallAddUp( ) returned 131328000000 in 0.000000 seconds
No time at all? Even with all the speed, you are still adding one million times, so I would expect at least 200 or 300 us elapsed in the whole execution.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
Quote: No time at all?
Exactly.
Nearly all of it gets computed at compile time when compiling with /O2 /Ob2 /Oi /GS- /arch:AVX2
Espen Harlinn
Senior Architect - Ulriken Consulting AS
The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. - Edsger W. Dijkstra
|
|
|
|
|
Espen Harlinn wrote: Nearly all of it gets computed at compile time
Wasn't there a very similar issue with the first version of the classic Dhrystone benchmark? Some machines (we had a bunch at that time, not just Intel and ARM) delivered extremely good benchmark results, far better than their application performance would suggest. It was traced to very clever compilers moving things out of inner loops. I am not sure whether things were moved out of the loops to be executed once instead of umpteen times, or evaluated at compile time, with zero run-time executions.
As soon as this was discovered, the Dhrystone benchmark was updated to Version 2.0, with modifications that made it impossible or difficult to move things out of the loop.
|
|
|
|
|
Quote: Wasn't there a very similar issue with the first version of the classic Dhrystone benchmark?
I really don't know …
If the compiler can perform calculations up front, then that is a Good Thing™, because the software then doesn't have to do those calculations at runtime. Compilers for C/C++, Go, Rust, and many other programming languages can do this pretty well.
Armadillo takes this pretty far, and is a good example when it comes to demonstrating the performance gains that can be achieved.
The point I am trying to make is that most of the CPU time, and other resources, used by most applications are wasted on things that have little, or nothing, to do with what the solution is actually supposed to be doing. The Python vs. C# vs. C++ comparison is just one example of how choosing the wrong tool for the job can cost you. In my head, Python is for configuring and orchestrating solutions implemented in native code.
If you just hate C++ and love scripting languages, use a decent one like Julia[^]; it will execute something similar to the snippets I wrote in about 0.77 seconds.
Another low-hanging fruit is to stop splitting solutions into a set of microservices. It is extraordinarily rare that a system consisting of a bunch of microservices can compete with the performance of a monolith, and 99% of the software implemented as microservices does not process enough data to justify the horizontal-scalability argument. Not only do people put network interfaces between what should have been implemented as libraries, they insist on sending the data as inefficiently as possible too (JSON/XML). If you really need to do this, use tools like FlatBuffers, Protocol Buffers, or Bond.
With libraries like Dear ImGui you can create nice web-based user interfaces; just take a look at the Dear ImGui JavaScript+WebGL example.
And, again, if you for some reason would like to use a language other than C++, there is a Rust library that can be used to develop web applications: egui – an experimental immediate mode GUI written in Rust.
Once you have started to use WebAssembly, it only makes sense to use WebSockets too, and then JSON is probably not your best choice for serializing data…
Better performance -> lower runtime costs
Espen Harlinn
Senior Architect - Ulriken Consulting AS
The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. - Edsger W. Dijkstra
|
|
|
|
|
Espen Harlinn wrote: If the compiler can perform calculations up front, then that is a Good thing™; because, obviously, then the software doesn't have to do those calculations at runtime. C/C++, go, rust, and compilers for many other programming languages can do this pretty well.
True, as far as it goes, but the idea of Dhrystone was to see how a compiler handles code that cannot be optimized at compile time. Most code cannot be optimized away at compile time, so Dhrystone 1.0 fails as an evaluation of the compiler's optimization abilities.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Espen Harlinn wrote: If the compiler can perform calculations up front, then that is a Good thing™
Certainly. All optimization (that produces identical results) is a good thing.
But the purpose of timing/benchmarking is either to get an estimate of how fast one specific application, generated in one specific way, will run on one specific hardware configuration, or you want an estimate of how fast the application in theory might run on some given hardware.
In the first case, it makes little sense to benchmark some other application, maybe written in some other programming language and/or generated with other tools. In principle, any compiler (for any language) might do all the same kind of optimizations as any other compiler (for any language), such as compile time evaluation. If one compiler has implemented this, and another one has not, you are really benchmarking the effect of that optimization, not a different application (where various optimization techniques may be applicable or not) nor one language against another.
In the second case, you want to benchmark capabilities of a given runtime platform, independent of which application you are running. That was the purpose of benchmarks like Whetstone and Dhrystone - testing hardware, not development platforms. The results should be as independent of development platform (/compiler) as possible.
You could point to a third alternative: you want to benchmark a development platform. But those results really say nothing about either your application code as such (which may be processed on different platforms) or the programming language (which may be provided on different platforms).
For the last 25+ years or so, I have considered optimization not to be my responsibility. I do not write code to suit one compiler rather than another, or one IDE rather than another. I decide the data structures and algorithms. Then I expect any compiler in any IDE to, e.g., compile-time evaluate everything that can be compile-time evaluated. Not every compiler is perfect, or complete with respect to optimization, so maybe a given compiler leaves some calculations to run time. But then the solution, as I see it, is to improve that compiler - not to switch to another programming language.
|
|
|
|
|