|
This code is like a surgery performed by Frank Burns in MASH 4077
|
|
|
|
|
In Kotlin you can do this:
var items = (1..300)
Additionally, you can call the shuffled() function on the range to get a list of those ints in random order.
Then you can call last() on the shuffled list to get a random item.
So if you want a list of 10 random values in the range of 1 - 1000, you can do it with just a couple of lines of Kotlin. Each time through the loop the range is shuffled and a new last() is chosen.
for (i in 1..10) {
    print((1..1000).shuffled().last())
    print(" | ")
}
Output looks like the following:
795 | 948 | 719 | 304 | 733 | 849 | 723 | 66 | 316 | 619 |
Try it out in your browser at: Try Kotlin[^]
These new types of syntax are ugly to me, but I can see that they could grow on a dev.
*Note: By "emerges", I just mean that newer languages have syntax which looks similar to this. Kotlin has had the Range type for a long time.
Basically a fluent interface[^] but just interesting that it becomes more of a core part of syntax in newish languages.
modified 15-Aug-19 16:49pm.
|
|
|
|
|
It's very Python or Ruby or whatever-esque. Personally, I really like that syntax, as it's so much easier to read. In fact, in C# I wrote an extension method so I can do:
5.ForEach(n=>DoSomethingWithN());
So technically, in C# with extension methods, I should be able to write:
10.ForEach(n=>1000.Shuffle().Last().ConsoleOut());
I like that syntax because it reads left-to-right and would be very much like the pipe operator in F#.
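For what it's worth, a rough Kotlin equivalent of that extension-method idea could be sketched like this (the Int.forEach name is my own invention here, not a stdlib function):

```kotlin
// Hypothetical extension: run an action once for each of 1..this.
fun Int.forEach(action: (Int) -> Unit) = (1..this).forEach(action)

fun main() {
    // Reads left to right, much like an F# pipeline.
    10.forEach { print("${(1..1000).shuffled().last()} | ") }
    println()
}
```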
Marc
|
|
|
|
|
Good point about left-to-right reading. I mostly like it too, and I'm sure it will grow on me.
I think the declaration/instantiation of the Range is a bit odd (1..10), with no new operator or anything, but it is quite streamlined.
|
|
|
|
|
Hmm, neat. I like it. Looks pretty similar to how I'd use ranges in Ruby:
(1..10).each do
print (1..1000).to_a.shuffle.last
print " | "
end
The only real difference is that I had to convert the range to an array, since arrays have a shuffle method and ranges don't.
Of course, being Ruby, I could just patch a shuffle method onto the Range class.
|
|
|
|
|
Oh, wow, that’s very interesting that it is that close to the Ruby version.
That’s one language I’ve stayed away from.
|
|
|
|
|
The subrange notation is about 50 years old, isn't it? In Pascal (1970), the parentheses are not needed, though.
Subranges are, of course, allowed on any discrete type. So if you have a TYPE month = (jan, feb, mar, apr, may, june, july, aug, sept, oct, nov, dec); - note that these are NOT integers, but a distinct value domain - you can declare a VAR SummerMonth: may..aug; and assigning a value to SummerMonth outside that range causes an exception. Maybe Kotlin provides a similar full-blown enum concept, including subranges.
(In Pascal, arrays can have any subrange index, e.g. Members: ARRAY [1970..2025] OF MemberList; - maybe that is possible in Kotlin as well, but it usually is not in C-derived languages.)
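For what it's worth, Kotlin enum entries are Comparable by declaration order, so you can build a range over them, though it's a run-time check, not a Pascal-style compile-time subrange. A sketch (the Month enum here is my own, not a library type):

```kotlin
enum class Month { JAN, FEB, MAR, APR, MAY, JUNE, JULY, AUG, SEPT, OCT, NOV, DEC }

fun main() {
    // ClosedRange<Month>: works because enum entries compare by ordinal.
    val summer = Month.MAY..Month.AUG
    println(Month.JULY in summer)  // true
    println(Month.NOV in summer)   // false
}
```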
|
|
|
|
|
Member 7989122 wrote: Subranges are, of course, allowed on any discrete type.
That's something I got used to with Ada that I wish I had in C++ (or any of the other statically typed languages I use). Even in languages like Kotlin or Rust, which have first-class support for ranges, they're still a run-time entity, not a compile-time one, so you can't constrain how a function works by a range in the same way. And you can't take sub-ranges of enums either.
Oh, for a dependently typed language?
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Member 7989122 wrote: The subrange notation is about 50 years old, isn't it?
Yeah, it probably is quite old.
Member 7989122 wrote: but it usually is not in C-derived languages.
And almost every language I use is C-derived (C++, C#, Java, JavaScript) so that's why it just looks odd to me.
|
|
|
|
|
Reminds me of Turbo Pascal, so not that new.
|
|
|
|
|
Yeah, I've probably just forgotten about Pascal. The last time I actually used it was probably 1996 or so.
|
|
|
|
|
raddevus wrote: Each time through the loop the range is shuffled and a new last() is chosen.
Isn't that code going to be horribly inefficient? Surely the language must provide a better way to get a random number in a particular range?
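For what it's worth, Kotlin (since 1.3) does provide this: a range has a random() extension that draws one value without building and shuffling a list. A quick sketch:

```kotlin
import kotlin.random.Random

fun main() {
    for (i in 1..10) {
        // O(1) per draw: no 1000-element list, no shuffle.
        print((1..1000).random())
        print(" | ")
    }
    println()
    // Equivalent explicit form (upper bound exclusive):
    println(Random.nextInt(1, 1001))
}
```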
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Richard Deeming wrote: Isn't that code going to be horribly inefficient? Surely the language must provide a better way to get a random number in a particular range?
Oh yes, just laziness on my part for a quick example.
There may be other ways to accomplish the same thing that are more efficient.
|
|
|
|
|
This approach would not scale well if you want a random number between 1 and 2^31-1.
|
|
|
|
|
That was my immediate thought too. Glad to see someone else cares about wasting cycles.
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
Alan Kay.
|
|
|
|
|
For basic loops I've wanted this type of syntax for a while, especially if I can replace the start and end items with variables.
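In Kotlin at least, the range bounds can already be arbitrary expressions, so variables work directly. A trivial sketch:

```kotlin
fun main() {
    val start = 3
    val end = 7
    for (i in start..end) print("$i ")  // 3 4 5 6 7
    println()
    // downTo and step work with variables too:
    for (i in end downTo start step 2) print("$i ")  // 7 5 3
}
```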
|
|
|
|
|
Haskell ranges:
let items = [1..10]
|
|
|
|
|
Every software developer loves analysing memory leaks, don't we?
Last Friday, I had to work on such a thing. After interacting with a different machine, our WPF application allocated some 100 MB per second, and no machine will stand that for a reasonable time span.
The interaction with that other machine is via WCF. Since some hardware actions take time, I implemented it as a Task. Sometimes, the hardware failed and raised an exception. Our WCF implementation failed to cope with the faulted task. An UnobservedTaskException was raised instead (see also Unobserved TaskException[^] ).
The exception handler still assumed .NET 4 behavior: that the application is about to crash now. It logged the exception and then showed a MessageBox with the exception details, but that MessageBox hid behind the main window of the application, becoming invisible to the user.
And now the memory leak started...
Why? Well, the handler of the UnobservedTaskException runs in the finalizer thread, and the MessageBox is modal. I.e. the MessageBox blocks the very thread the garbage collector needs for unmanaged memory cleanup...
Some things go terribly wrong.
Oh sanctissimi Wilhelmus, Theodorus, et Fredericus!
|
|
|
|
|
I once managed to create a memory leak in Mono by creating an object in one event handler and using it in another. It worked fine in .NET, but memory ran over on the RPi in Mono. Granted, the code wasn't clear to begin with.
|
|
|
|
|
Sorry, I don't remember what you just said...
enum HumanBool { Yes, No, Maybe, Perhaps, Probably, ProbablyNot, MostLikely, MostUnlikely, HellYes, HellNo, Wtf }
|
|
|
|
|
I once spent close to three months eliminating leaks from a C#/WPF application. There were basically two sources of leaks.
One was the false optimizations I had done as a matter of course due to my prior experience as a C++ programmer. I tended to cache resources, which caused them to leak event handlers and such. I also found that data bindings constructed in code rather than XAML leaked, and had to be cleared manually when you were done.
The second was the WPF flow document and page navigation mechanisms. Both of them leaked horribly - several megabytes per operation. I replaced them with HTML/WebBrowser (which leaks, but much less) and a home-grown navigation mechanism.
When I thought I had all of the leaks figured out, I ran the app over a four-day weekend with a tiny test driver that navigated randomly through the UI every couple of seconds. When I came in, the app was still running and had peaked at 400MB, which wasn't bad considering it took up 275MB just starting up. All of the leaks were attributable to the WebBrowser control.
Many thanks to the folks at SciTech Software for .NET Memory Profiler.
Software Zen: delete this;
|
|
|
|
|
This may not address your issue, but I address memory leaks as a preventive process, and have done so going back to VB6 days. In short, I clean up my resources and objects before I allow an instance of a class to go away.
In C#, I almost ALWAYS use try-catch-finally. I declare an object as null before the "try", instantiate it in the try block, catch any exceptions (a whole other discussion), then clean up my objects in the finally block. If they have a Dispose or Clear method, I call it and then set the variable to null. No waiting on the GC, or coding shortcuts like "using" that get compiled to try-finally anyway.
If my class has class-level objects, I use my Dispose template that also includes the finalizer.
Since taking this approach years ago (which is mostly copy and paste snippets), the little added effort has helped me to have zero memory leaks, and as an added bonus (that "catch" thing again), I get excellent debugging info.
Sometimes consistency in how we code eliminates a lot of problems later.
|
|
|
|
|
Quote: In C#, I almost ALWAYS use try-catch-finally.
That's the everyday "Padre Nostro".
|
|
|
|
|
(Cue Alec Guinness) I've been writing C and C++ for a long time... a long time. I thought the preprocessor was my bitch. I had some code that compiles only for debug builds that I wanted removed temporarily, so I did this:
#if defined(_DEBUG) && false
#endif
Not only does the VS2008 (don't ask) compiler not like that [1>.\Document.cpp(1216) : fatal error C1017: invalid integer constant expression], it's actually right! The #if expression is integral, not boolean.
You have to do this:
#if defined(_DEBUG) && FALSE
#endif
(in the Windows world anyway), or this:
#if defined(_DEBUG) && 0
#endif
to get the effect I wanted.
Software Zen: delete this;
|
|
|
|
|
Another quote by Alec Guinness: Quote: Who is more foolish? The fool, or the fool who follows him?
|
|
|
|
|