|
The web side of things is always hyper-inflated!
MVC + HTML + JavaScript is a reasonable baseline requirement.
jQuery, although used less and less, is still good to know; just spend 2 hours on it and you will get the idea.
Then an MVVM library is nice. I prefer Vue.js, but hey, Knockout or Angular are other candidates. Angular is more of a full framework, though, and imposes a development style.
For styling, Bootstrap helps, and it only takes 2 hours to get the hang of it.
Then there are other popular things that I have never used, so one could probably survive without them: Node.js and npm.
|
|
|
|
|
Because the people advertising the jobs have no idea ... about pretty much anything.
|
|
|
|
|
Well, I believe things got complicated when we went from making applications that ran on Windows to writing code that generates code that runs in a sandbox on various machines/devices to interact with the user.
Oh, did I mention that the "sandbox" is actually one of MANY sandboxes (IE, Chrome, Firefox, Safari, etc., etc., etc.)... which require libraries (jQuery) to make them look like one unified sandbox (ish)?
I thought the purpose of .Net was to fix this? Or was that Java?
Oh look a new device to write for... Gotta go!
|
|
|
|
|
My favorite posting with .NET as a requirement came about a month after Microsoft first released Visual Studio .NET as a follow-on to VS 6. It wanted 3-5 years of .NET experience, which would only have been possible if you had been on the Microsoft tools development team that wrote .NET 1.0 and VS.
|
|
|
|
|
Stoney,
Since you mention job postings, I wonder if this is venting due to you having some issues with your job search? I am a lead developer/manager on the web applications team for a company, but these comments are all my own and not representative of the company I work for. I write the job postings for our openings and manage the entire interview process. I have a master's in Computer Science and have been professionally developing for about 10 years now, so I'm not an "HR" type. Despite that, I do include most of what you mentioned in my postings as well.
The reason is, as mentioned further down the thread, that these frameworks do not take long to learn, but the mindset that you need to write good web code is completely different from desktop development, and is quite a bit different from mobile development as well. Even the differences between the Web Forms and MVC approaches are considerable. jQuery, Angular, and web services (WebAPI or WCF) are all part of the Microsoft stack now, as is Bootstrap. Even node.js, to some extent. You may be able to find a job that just uses pure ASP.NET (prior to MVC), but if you are using the latest version of Microsoft's web tech stack, most of the above will be included.
All of these are just toolsets and means to an end. When I interview, a large component of our process is several real programming tasks. The MVC challenge is done offline before you come on-site, which gives you the opportunity to learn the right way to do the task and implement it as such. It has components of everything I mentioned above. When you are on-site, we do a live coding exercise where you are given a problem description and have to go from a text document to a functioning application that meets all requirements in the span of an hour. This is a console application and doesn't include any web functionality. This gives us the opportunity to see how you think, how you deconstruct the problem, how fast you can implement it, and how much and how effectively you use online resources such as Stack Overflow. It is surprising how enlightening this component is, and it makes me wonder why more companies don't do something similar.
Remember, an interview is many-faceted. It is not strictly about your ability to develop software in a vacuum but about your ability to do the job interviewed for. In many cases in the software development world, this requires a growth vs. fixed mindset, interpersonal communication skills, the ability to gather and document requirements effectively, etc.
I don't so much care that you have used Angular before, or WebAPI, or whatever. I care that you can see the value in these technologies and are excited to learn them and use them. From your post, it seems like that's not you, and that could severely hamper you in an interview. I'm not going to pass on your resume because you didn't use any of these components at previous positions, but I'm going to expect that you've read about all of them and can speak on them when you are interviewing with me.
That's really what the job posting is for, a tip-off on our tech stack and a cheat sheet to prepare for the interview. We do not require that you have used every technology, but our language is similar to "Familiarity with all or some of the following" to tip you off that you should become familiar with these.
If you confuse Java with JavaScript, or think node.js is a client side framework, it will be pretty obvious that you don't care enough to prepare for a position where these technologies are used, and are comfortable in knowing what you know and not a whole lot else. Maybe this isn't you, I don't know you. I am just commenting in a general sense in the hope that it might help you see where we are coming from on the other side of the fence, and possibly some beliefs that could be holding you back on your search.
One parting comment I would make as well: these technologies are quite clearly the way of the future for Microsoft. .NET Core is similar in many ways to node. npm and NuGet are essentially the same. The cross-platform nature and hosting of Kestrel+C# (outside of IIS) is just like node+JavaScript. In fact, it could be argued that it's a better version, since C# is a better language in many ways and Kestrel outperforms Node. Microsoft is focusing on container technologies moving forward with Windows Server 2016 and the Docker partnership. The purchase of Xamarin and open-sourcing of .NET are major signals that MS doesn't want .NET to be tied directly to Windows.
The landscape of our tech stack is experiencing a seismic shift right now and I for one am excited to see where it goes. I tend to hire people who have a similar mindset.
|
|
|
|
|
I don't recall who it was who ebulliated over the release of a boxed set of Person of Interest, but it inspired me to send off for a set (and then spend an eternity ripping the discs to add Chinese subtitles), and it was a good call.
Impressed am I the show with, thus far, so Cheers!
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
POI was a good show; enjoyed the character development, and, being in a tech environment, can see how things COULD progress.
|
|
|
|
|
From what I've seen so far, it's extremely well thought out, so much so that it could even fit under my Real Futures(TM) umbrella, and the characters are brilliantly written, in that they actually think and behave as more than the usual cardboard cut-outs that US TV is riddled with.
It's nice to see good people (which 99% of people are) trying to do something good with their lives (which 99% of people would do, if they weren't so busy just trying to survive).
Lots of kudos to the writers*.
* Of the first season, at least.
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
We've watched the entire series via Netflix; both my wife (non-techy) and I (techy) enjoyed it.
|
|
|
|
|
Hi All,
First post here. Hope this is a suitable subject. You've no doubt all noticed that AI is the flavour of the month on numerous web sites and media outlets but I'm wondering (based on my current personal experience discussed below) if it truly is as far advanced as the hype would have us believe.
Recently I've been working on a personal project to summarize, in a very simple form, the remarks of stock analysts on specific stocks on a Canadian TV show. Amongst other things, I want to identify the stock & figure out if the remarks were +ve, neutral or -ve (i.e. their "sentiment"). I've been using NLP tools from Stanford & Microsoft Cognitive Services (MSC) as part of a C# .NET program. Using the Stanford sentiment API, I have trained a model on a set of about 900-plus sentences & about 3,000 business words rated as + or - by others. After about 4 iterations I have about 90% accuracy on the training set & near 80% on a reserved set of another 150 sentences. Seems OK, but, for example, "Stock's Up 180% This Year" & "The Stocks Are Doing Well" are still rated as neutral, and "I'm Very Cautious On The Utility" is rated as positive! I haven't spent as much effort on the MSC sentiment tool, but it did not do too well in a limited test. For example, "It's Underperforming The Market" gave a score of 0.82, which is very positive in the Stanford rating system. Overall, there does not seem to be that much "intelligence" at work in these classifiers despite words such as "deep learning" being mentioned.
Sorry about the length of post required to provide context. Any thoughts on this topic are appreciated. Am I expecting too much, or am I missing something vital that takes things to another level? Regards, RB
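For what it's worth, the failure mode you describe is easy to reproduce with an even simpler approach. Below is a naive bag-of-words baseline in Python; the lexicon is a toy stand-in for illustration, not your 3,000-word list. Summing per-word scores has no notion of word order, negation, or hedging, which is roughly why sentences like "I'm Very Cautious On The Utility" trip up score-matrix approaches:

```python
# Naive bag-of-words sentiment baseline: sum per-word scores from a
# hand-made lexicon. The lexicon here is a toy illustration only.
LEXICON = {"up": 1, "well": 1, "strong": 1,
           "underperforming": -1, "cautious": -1, "weak": -1}

def score(sentence: str) -> str:
    total = sum(LEXICON.get(w.strip(".,!?%").lower(), 0)
                for w in sentence.split())
    return "positive" if total > 0 else "negative" if total < 0 else "neutral"

print(score("Stock's up 180% this year"))        # "up" scores +1 -> positive
print(score("I'm very cautious on the utility")) # "cautious" scores -1 -> negative
print(score("The numbers look fine"))            # no lexicon hits -> neutral
```

Note that this toy version gets "cautious" right only because that word happens to be in its tiny lexicon; the moment a speaker hedges with a phrase the lexicon has never seen, it falls back to neutral, much like your trained model does on "The Stocks Are Doing Well".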
|
|
|
|
|
Nice to know. Maybe you'll write an in-depth series of articles about it someday.
But I think AI isn't that well developed yet. Even at 90%, every tenth piece of information gets interpreted wrongly. Fine for "small talk" but not suitable for "rocket science".
Press F1 for help or google it.
Greetings from Germany
|
|
|
|
|
80% is about what's expected; you can get a bit more with more data.
There is no other level (yet).
|
|
|
|
|
Member 10415611 wrote: Amongst other things, I want to identify the stock & figure out if the remarks were +ve, neutral or -ve (i.e. their "sentiment"). If the remarks are made by a human, you'll need a human to interpret them. A computer will not recognize sarcasm, hypothetical situations, or personal biases. You could just as well count all the times the word "buy" appears.
Also, does the algorithm "know" if the article says "updated: 1900h, confirmed hoax"?
Member 10415611 wrote: on a reserved set of another 150 sentences Yes, but humans will generate unexpected sentences with weird opinions, and words that are "just invented" and "cool".
Member 10415611 wrote: "The Stocks Are Doing Well" are still rated as neutral Which in my head is neutral, as it is merely a statement about the current situation. In itself, the statement cannot be said to be positive nor negative, even from a traders' perspective.
It would be great if you could expand the sentences to train toward today's headlines. The more variations the AI sees, the better it becomes. In theory at least; I'm not giving any guarantees.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy,
Thanks for your comments. What you say is definitely true regarding sarcasm, hypothetical situations, biases, etc. Similarly, there are many unexpected sentences, weird opinions and invented words. My data set is from about 10 different speakers, with sentences randomly selected from the segments where they are specifically discussing a particular stock. Even though this is a fairly narrow domain, it's amazing how many ways people come up with to say the same thing. Also, they very seldom say "buy", "sell" or "hold" even if directly asked for their recommendation. They will usually say about 5 or 6 sentences which I, as a human, can interpret as a veiled buy/sell/hold recommendation.
I am not sure if training using a more general source would help. The original "model" provided with the Stanford NLP API is based on film reviews & it gave only about 52% accuracy on my original training set. Its training data contains a lot more general phrases as well as film-related ones. I might try combining my set & theirs for an experimental training run & see if that helps. Ultimately, however, I think that what's needed is another "layer" of intelligence that actually puts things together to "understand" the sentences, rather than just applying a kind of matrix of +ve/-ve scores for words and phrases in the sentence. The "sentiment" analysis is only a part of my code; another part is "rule based", looking at keywords and the structure of questions/answers. I hope that by combining these two things I can get a bit further.
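The combination described above, a rule layer over a statistical sentiment score, can be sketched roughly as follows. The keywords, thresholds, and score range are hypothetical placeholders, just to show the control flow, not anyone's actual implementation:

```python
# Hedged sketch: a keyword rule layer takes priority when the speaker is
# explicit; otherwise fall back to a model's sentiment score in [-1, 1].
# Keywords and thresholds are invented for illustration.
def combine(rule_hits: dict, sentiment: float) -> str:
    """rule_hits: counts of explicit keywords found in the segment."""
    if rule_hits.get("buy", 0) > rule_hits.get("sell", 0):
        return "buy"                     # explicit recommendation wins
    if rule_hits.get("sell", 0) > rule_hits.get("buy", 0):
        return "sell"
    # No explicit keywords: interpret the veiled signal.
    if sentiment > 0.3:
        return "buy"
    if sentiment < -0.3:
        return "sell"
    return "hold"

print(combine({"buy": 1}, 0.0))   # explicit keyword overrides the model
print(combine({}, -0.6))          # veiled negative -> sell
print(combine({}, 0.1))           # weak signal -> hold
```

The interesting design question is which layer to trust when they disagree; in the sketch the rules win outright, but a weighted vote is another option.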
|
|
|
|
|
Member 10415611 wrote: They will usually say about 5 or 6 sentences which I, as a human, can interpret as a veiled buy/sell/hold recommendation. That's your basic human, not willing to commit to anything and giving vague descriptions instead of a simple "42" with complete specs.
Member 10415611 wrote: I think that what's needed is another "layer" of intelligence that actually puts things together to "understand" the sentences "Understanding" would be a holy grail like achievement.
Perhaps you don't need complete understanding of the language - if you can identify the sentiment more correctly than simple statistics can, then you'd have an advantage over those who can not. And perhaps it would be helpful to combine those ideas, since even sarcasm follows a pattern* that humans must be able to recognize.
*) in a single language the syntax should be predictable
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
in a single language the syntax should be predictable
the syntax in a single language should be predictable
the syntax should be predictable in a single language
predictable the syntax in a single language should be
predictable in a single language the syntax should be
predictable in a single language should be the syntax
predictable in a single language should the syntax be
I'm sure there are more, but now I'm bored.
|
|
|
|
|
AI can't understand elephant.
|
|
|
|
|
..and a good example too.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Are you using backprop or RNT?
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
Mark,
The Stanford system uses a "Recursive Neural Tensor Network" to train its sentiment model. The sentences are first parsed and processed into a set of "binary trees" with sentiment scores (0, 1, 2 for -ve, neutral & +ve in my case) attached to each word and phrase. I'm afraid I'm not an expert in the theory of NLP so not sure how that fits into "backprop or RNT." I've learnt a bit about the overall field of NLP & a fair bit about the Stanford approach with the hope of creating my application without making some dumb error but that's as far as it goes.
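For readers unfamiliar with that approach, the bottom-up composition over a binary parse tree can be sketched like this. It is a toy stand-in: a real RNTN applies learned tensor weights and a nonlinearity at each node, not a simple average, and the word scores below are invented:

```python
# Simplified sketch of recursive composition over a binary parse tree,
# in the spirit of the Stanford RNTN (minus the learned tensor layer).
# Classes 0/1/2 = negative/neutral/positive, as in the post above.
def compose(node, leaf_scores):
    """node: a word (leaf) or a (left, right) pair; returns a score in [-1, 1]."""
    if isinstance(node, str):                 # leaf: look up the word's score
        return leaf_scores.get(node, 0.0)
    left, right = node
    # A real RNTN combines the children with a learned tensor + tanh;
    # averaging already shows how scores flow bottom-up through phrases.
    return (compose(left, leaf_scores) + compose(right, leaf_scores)) / 2

def to_class(score):                          # map the score to labels 0/1/2
    return 2 if score > 0.1 else 0 if score < -0.1 else 1

scores = {"very": 0.0, "cautious": -0.8, "utility": 0.0}
tree = ("very", ("cautious", "utility"))      # (very (cautious utility))
print(to_class(compose(tree, scores)))        # -0.2 -> class 0 (negative)
```

The point of the tree structure is that a phrase's sentiment is computed from its sub-phrases rather than from a flat bag of words, which is what lets such models handle negation and modifiers better than a word-score lookup.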
|
|
|
|
|
Hmm. If they want predictions, they should go with backpropagation, rather than RNT(N). RNT isn't quite as "intelligent" (which, partly, means that you can understand its decisions, because it's more of an instance-tree population, i.e. a percentage player, than a genuinely "intelligent" solution).
Backprop seems ideally suited to the problem you're working on, but I suppose it's not someone's "pet concept" at the moment. That's universities, for you.
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
Interesting. I'll have to look into backprop.
|
|
|
|
|
A friend of mine is a computational linguist. He says that at best it's gotten to about 85%, but that's with well structured source. What surprises me is that even with highly specific data, getting it better gets very complicated (though even in the highly specific stuff he recently worked on, 50% accuracy saves so much time that even that level of accuracy is worth it.)
|
|
|
|
|
Joe,
Interesting. What do you mean when you say:
Joe Woodbury wrote: 50% accuracy saves so much time that even that level of accuracy is worth it.)
|
|
|
|
|
If you're using natural language processing to assist in some task which is completed by humans, getting 50% completely right could save those humans a tremendous amount of time. Further, getting 85% accuracy with, say, even a 10% error rate may actually cause the humans to take even more time than if they hadn't used the computer program in the first place.
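A back-of-envelope model makes this concrete. All numbers below are hypothetical: `review_time` is the cost of handling one item by hand, `fix_time` the extra cost of spotting and undoing a wrong machine answer:

```python
# Back-of-envelope model of the automation trade-off described above.
# All figures are invented for illustration.
def human_minutes(n_items, auto_rate, error_rate, review_time, fix_time):
    """Total human minutes when the machine handles auto_rate of the items,
    of which error_rate are wrong and must be found and corrected."""
    untouched = n_items * (1 - auto_rate) * review_time
    wrong = n_items * auto_rate * error_rate * fix_time
    return untouched + wrong

baseline = 1000 * 5                                # all by hand, 5 min each
half_right = human_minutes(1000, 0.5, 0.0, 5, 0)   # 50% automated, all correct
risky = human_minutes(1000, 0.85, 0.10, 5, 30)     # 85% automated, 10% wrong

print(baseline, half_right, risky)  # roughly 5000, 2500, 3300
```

With these invented numbers, automating half the work perfectly halves the effort, while automating 85% of it with a 10% error rate saves less; and if a wrong answer cost 50 minutes to catch instead of 30, the "better" system would save nothing at all.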
|
|
|
|
|