|
My current solution is a custom table to manage sequences and a queued key-provider service in the DAL. The table could probably be replaced with SQL Server sequences (which the key-provider service would then abuse without actually committing inserts), but I'm not sure what I would gain by doing that.
|
|
|
|
|
Pete O'Hanlon wrote: In order to preallocate values, you would need to be able to guarantee that the number you got from the DB/DAL was unique.
With GUIDs it's a matter of statistics: the chance of a collision is vanishingly small.
Sequential GUIDs might make it smaller still, since they are time-based: part of the GUID derives from the current time, and that part will never repeat in the future.
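The time-based part is easy to see with Python's `uuid.uuid1`, a version-1 time-based GUID (not necessarily the same layout as SQL Server's sequential GUIDs, so treat this as an analogy only):

```python
import uuid

# uuid1 embeds a 60-bit timestamp (100 ns ticks since 1582-10-15), so two
# GUIDs generated at different moments can never collide on that field.
a = uuid.uuid1()
b = uuid.uuid1()

print(a != b)            # True: distinct GUIDs
print(a.time <= b.time)  # True: the later GUID carries a later (or bumped) timestamp
```

CPython even bumps the timestamp if two calls land on the same clock tick, so within one process the `time` field is strictly increasing.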
|
|
|
|
|
It doesn't look like he wants to do this.
|
|
|
|
|
Hello all, first time here.
I am trying to increase my cyber-security knowledge by creating a small intrusion detection system (IDS). I was hoping someone could review the code, give me some feedback, and maybe point me in the right direction. Currently I need intrusion signatures for filters.txt, if anyone knows of a database of some sort. I am also not too sure where to go next. My current thought is to just check for in/out bin/sh; if bin/sh were to come across the network tap, then disconnect and block all future connection attempts.
Please note that this is basically running pseudocode.
I am well aware the style isn't Pythonic; for now I am just trying out ideas.
Any and all advice would be awesome.
Thanks
import socket
import subprocess

import dpkt
import pcap

LOG_PATH = '/usr/home/mrfree/Desktop/Scripts/ipLog.txt'
FILTERS_PATH = '/usr/home/mrfree/Desktop/Scripts/filters.txt'

def capture():
    dev = pcap.lookupdev()
    for ts, pkt in pcap.pcap(name=dev, snaplen=65535, promisc=True, immediate=False):
        eth = dpkt.ethernet.Ethernet(pkt)
        if eth.type != dpkt.ethernet.ETH_TYPE_IP:  # 2048 / 0x0800: not IPv4
            ip = eth.data
            try:
                # Record the destination if it turns out to be IPv6 (currently unused).
                dst_ip_6 = socket.inet_ntop(socket.AF_INET6, ip.dst)
            except (AttributeError, ValueError):
                continue
        else:
            ip = eth.data
            tcp = ip.data
            try:
                src_ip = socket.inet_ntoa(ip.src)
                dst_ip = socket.inet_ntoa(ip.dst)
                if dst_ip == '192.168.1.2':
                    with open(LOG_PATH, 'a') as log:
                        log.write('Session:%s:%s,%s\n' % (src_ip, tcp.dport, ts))
                        print('Session:%s:%s,%s' % (src_ip, tcp.dport, ts))
                        if tcp.dport < 1028:
                            log.write('Out of bounds connection attempt, Blocking %s\n' % src_ip)
                            print('Out of bounds connection attempt, Blocking %s' % src_ip)
                        # Load one signature per line and scan the payload for each,
                        # instead of testing the whole file contents as one string.
                        with open(FILTERS_PATH) as f:
                            signatures = [line.strip() for line in f if line.strip()]
                        if any(sig.encode() in tcp.data for sig in signatures):
                            log.write('Attempted Shell connection, Blocking %s\n' % src_ip)
                            print('Attempted Shell connection, Blocking %s' % src_ip)
                            subprocess.call(['pfctl', '-k', src_ip])
            except (AttributeError, TypeError):
                continue

if __name__ == "__main__":
    capture()
|
|
|
|
|
A general comment: since this is a design/architecture forum, a design/architecture description might be a better starting point than a hunk of code to elicit comments.
|
|
|
|
|
Sorry, this is my first time on this thread. I didn't see any other subsections that looked more appropriate for designing/building an IPS. I assure you, I was merely trying to find general help with the design and architecture of a program that would watch over the network and interior components of a FreeBSD operating system. Please understand this was just a misunderstanding, as your website here is not so user friendly. Thanks
Mod: please delete thread
|
|
|
|
|
orphansec wrote: Sorry
It was a suggestion, not a warning.
I might comment on what your code should do if you explained what you want it to do. But I won't comment on that block of code, mainly because I don't want to try to figure out what it is that you think you are doing with it.
|
|
|
|
|
orphansec wrote: Any and all advice would be awesome The general rule here is that people will help you to identify and fix bugs in your code when you post a detailed question. Code review is a much more time-consuming activity, so very few people have the time or inclination to do it. Having looked at your code, I cannot see anything that stands out as wrong, but then I don't really understand what its purpose is. It also helps if you avoid TLAs (such as IDS) and abbreviations (such as sigs). Remember: the more information you provide, the more chance someone will be able to help you.
|
|
|
|
|
Note: I am not asking specifically about the actual implementations of Model-View paradigms in ASP.NET, or WPF.
Here's one diagram of Model-View paradigms: [^] (the source is from a JavaScript centric article).
I am particularly interested in how you conceptualize which "components" (model, controller, view, viewmodel, etc.) do the "business" of managing sources of data (server, cloud, web, local data stores), possibly using an ORM, and how Views get their data and have data bound to Controls in the View. And, if data must be "transformed" for use, which component is "responsible" for the transformation, i.e., where the transformation is performed.
thanks, Bill
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
|
|
|
|
|
I'm not quite sure if this is what you're asking for, but here I go.
I'm using MVP with WinForms (my experience with other flavors of MVxx isn't worth mentioning). My way of implementing it:
The Presenter is the only thing that knows about the other two. It subs to events from View and Model, calls methods on them and can read their properties. It contains the state of the View, the logic to manipulate it and synchronizes between Model and View - but it does no processing of the data whatsoever.
The Model can take different shapes: It can be a rather dumb data container, potentially already filled via constructor arguments. Or it can contain the code to pull data from the various sources and potentially transform it. Or it can be an adapter to a non-primitive business object (e.g. some kind of workflow). In the latter two cases it may hold the session of the ORM.
So either the Presenter "knows" that the data is already present on instantiation or it waits for some "data available"-event from the Model, which may be the result of a previous user request -> View fires event -> Presenter calls Model. The Presenter then hands the data to the View by calling methods on the View that take the data as arguments and either bind it or "just display" it (e.g. setting Label.Text = x).
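The flow above can be sketched language-agnostically; the sketch below uses plain Python callbacks standing in for .NET events, and every class and method name is illustrative rather than from any framework:

```python
class Model:
    """Dumb data holder that announces when data becomes available."""
    def __init__(self):
        self.data = None
        self.listeners = []

    def load(self, value):
        self.data = value
        for cb in self.listeners:       # the "data available" event
            cb()

class View:
    """Stands in for the form; it just records what it was told to show."""
    def __init__(self):
        self.displayed = None
        self.listeners = []

    def click_refresh(self):            # simulates a user action
        for cb in self.listeners:
            cb()

    def show(self, value):              # e.g. setting Label.Text = value
        self.displayed = value

class Presenter:
    """The only class that knows both sides; it does no data processing."""
    def __init__(self, view, model):
        self.view, self.model = view, model
        view.listeners.append(self.on_refresh_requested)
        model.listeners.append(self.on_data_available)

    def on_refresh_requested(self):
        self.model.load("fresh data")   # View fires event -> Presenter calls Model

    def on_data_available(self):
        self.view.show(self.model.data) # Model fires event -> Presenter updates View

view, model = View(), Model()
presenter = Presenter(view, model)
view.click_refresh()
print(view.displayed)  # fresh data
```

Note that neither View nor Model references the other, which is exactly what makes each of them testable in isolation.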
Does this help?
- Sebastian
|
|
|
|
|
Thanks, Sebastian, for your very interesting response. Got my upvote
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
|
|
|
|
|
I suspect we have a bastardised setup of MVVM on WPF, but here is our structure.
WCF serving up ObservableCollections (or int) as the transport format. The database makes extensive use of stored procedures and views, 90% of which are generated by an in-house ORM.
The WCF service also has the Models (the item type of the ObservableCollections), which represent the views of each table in the database. The Models project is shared by the WCF and client projects. Properties in the models implement INotifyPropertyChanged. So, other than the INPC, the models have no functionality.
The client has a DataServices folder where the database tables are represented by classes that get the collections from the WCF service. The ViewModel is bound to the View, gets the data from the DataServices classes, and populates the Model collections. 98% of all the work is done by the ViewModel.
By sharing the Models project between the WCF service and the client, there is no translation required; this may be technically wrong, but it works perfectly.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi friends,
I'm looking for some new and fresh ideas.
Using ASP.NET, JavaScript, web services, jQuery.
My question is this: what is the best way to configure controls and display data for a specific profile or user group?
I have already used switching options on and off in controls like GridView, for example hiding or showing specific data columns and displaying a specific button. The logic is in the code-behind, checking for a specific profile in each case (user-profile-option).
I think this is not the best way to do it; it's hard to change and complex.
Another idea is to separate things into a few pages, each with the specific data and controls for one profile, but I think that would be very difficult to edit and administer, and I'd lose control of the development.
Another idea is a little extreme:
what about storing, in a SQL table in the database, which data columns and which controls a specific profile can or cannot use?
This SQL table would contain columns holding the JavaScript options needed to perform specific actions.
It would also contain the column names to be shown or hidden in the user interface.
Does anyone have any ideas?
Greetings
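To make the table-driven idea concrete, here is a minimal sketch with an in-memory dict standing in for the proposed SQL configuration table; every name here (`PROFILE_CONFIG`, `configure_ui`, the column and button names) is hypothetical:

```python
# Each entry plays the role of one row per profile in the (hypothetical)
# SQL configuration table: which grid columns and buttons the profile sees.
PROFILE_CONFIG = {
    'admin':  {'grid_columns': ['Name', 'Email', 'Salary'],
               'buttons': ['Edit', 'Delete']},
    'viewer': {'grid_columns': ['Name', 'Email'],
               'buttons': []},
}

def configure_ui(profile):
    """Return which columns and buttons to render for a profile.

    In the real application this would read the configuration table from
    the database (once per session, cached), instead of a module-level dict.
    Unknown profiles get the most restrictive configuration.
    """
    cfg = PROFILE_CONFIG.get(profile, {'grid_columns': [], 'buttons': []})
    return cfg['grid_columns'], cfg['buttons']

cols, buttons = configure_ui('viewer')
print(cols)     # ['Name', 'Email']
print(buttons)  # []
```

The upside over code-behind switches is that adding a profile becomes a data change rather than a code change; the downside is that behaviour now lives in data, so it needs the same review discipline as code.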
|
|
|
|
|
First, I think the Application's hardware and user context are going to constrain any solution: it's one thing if you are talking about a distributed application where each user (client) has a "rich-client" communicating as needed with a server/web/cloud(s). Another thing where you have "thin clients" and most of the "work" is done on the server/web/cloud(s).
Ideally, I think that each user should only have access to a customized UI that contains only Controls/facilities appropriate to their Group/Role/Permissions, but that "KISS" principle may have to be ignored depending on the reality of software development in the "real-world."
I wrote a long response to another thread on this Forum about user-role permissions and UI; even though that's written specifically about using C# and Windows Forms, I think you might get something out of it [^].
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
|
|
|
|
|
A recent exchange between _Maxxx_ and CDP1802 here: [^].
It resonates with an urge I have to "get my feet wet" in database programming in a scenario that is data-intensive and that does use a server for the database. I am familiar with writing "persistence mechanisms" in WinForms. And, not having used SQL, I am eager to use C# and Linq primarily.
_Maxxx_'s statements about code-first DB's possibly returning a "glut" of data make me wonder if there isn't a way to have an "intermediary" app running on the server that takes Linq queries from clients as input, and returns highly-filtered data.
I am pretty sure this question's too broad without specifying the type of data involved: I am interested in data where there are many multiple-references/linkages across categories/objects. I have been investigating/studying DB's like Neo4j [^], but using that would take me off into Java-land where I do not want to go.
Appreciate any thoughts !
thanks, Bill
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
modified 11-Feb-15 7:31am.
|
|
|
|
|
You don't need an intermediary app; besides, all that would do is eat memory on the server, memory which I'm sure SQL Server would prefer to use itself.
I think in general the issue is this: because you'll have a POCO with a collection of another POCO, people will typically just use that collection to get the related details, without realising that in the background these POCOs are likely mapped to tables, and they've just asked EF to do SELECT * from two tables. But you can use Linq queries to narrow down what you want, and EF will then bring back just that.
It's just 'harder', or at least less obvious, that you need to do that. And it's pretty easy to see the SQL EF generates, and to have it logged so that you can check it.
I find that the state of EF object graphs is the harder thing to grasp and visualise.
|
|
|
|
|
I appreciate your thoughts, and the take-away I have from your comments is to familiarize myself with EF.
thanks, Bill
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
|
|
|
|
|
BillWoodruff wrote: if there isn't a way to have an "intermediary" app running on the server that takes Linq queries from clients as input, and returns highly-filtered data.
Yes, but then all you are doing is introducing yet another server that might have to deal with too much data.
Instead, one should start with a design that applies the filter in the database and does so in an effective manner. Doing that reduces both the load on the server and the amount of data that needs to be returned.
|
|
|
|
|
Thanks for your reply, and I think I grok the very common-sense gist of your comment, which I interpret as "memory ain't cheap on the server, either." Hope that's not too far off-the-mark. Bill
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
|
|
|
|
|
It isn't just memory: moving the data to another server requires work for each server and for the OS as well. And that is true even on the same box.
One of the worst solutions I have seen was an application whose designers decided to make it database agnostic, and the way they did that was to move all (all) of the business logic out of the database. That works for small volumes but is absolutely useless when large volumes must be processed (in one case I saw, they moved the entire database to a client box, processed it, then moved it back). It couldn't scale at all. It probably could have scaled, but they didn't design it that way. Processing on the database, even if it wouldn't have scaled to massive volumes, would at least have worked for the real volumes that their actual solution couldn't handle.
|
|
|
|
|
The ORM I have worked with so far (DataObjects.Net, code-first) allows you to specify which referenced entities / entity collections you want to have instantiated, e.g.:
var query = (from customer in Query.All<Customer>()
             where customer.Something == true
             select customer)
            .Prefetch(customer => customer.Orders);
I assume there's something similar in Entity Framework or NHibernate. This should cover the main point of your question regarding the "glut"..?
- Sebastian
|
|
|
|
|
Thanks, I will check out DataObjects.NET !
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
|
|
|
|
|
My first reaction when you say that you primarily want to work with C# and Linq, is that you should take a look at RavenDB[^]. I wouldn't say it's a mature product as it has a fair share of gotchas, but it's made for C# and linq.
But then I read that you want to work with multiple-references/linkages across categories/objects, and then there's no shortcuts to take anymore. You'll need to learn a proper RDBMS (or maybe a graph database as you already investigated). And the vast majority of those use SQL.
As a first read I'd recommend Sander's article[^]; it covers the query language pretty well in a structured manner while still being easy to read. DML, DDL and indexing can wait for later.
As you're coming from the third-generation programming world, you'll probably notice that SQL is a fourth-generation language. You need to change your approach to programming: stop telling the program what to do, and instead tell it what you want. You also need to start thinking in sets rather than rows or objects, so you'll need to brush off your set theory[^] knowledge.
As a second step, I'd recommend learning normalization[^]. Since the whole concept of normalization reeks of buzzwords, which makes it rather hard to grasp for a layman, I'd actually recommend an article that shows how to do it instead[^].
If I should address the discussion you refer to, I should say that one needs to use the right tool for the right job. And when people fail it's not on choosing the tools but rather on defining what the job is.
What a database is good at is storing, retrieving, filtering and aggregating data, and it's normally much better at that than anything you can think up in the business layer; it's optimized for exactly that. But that's about it. (If I may oversimplify.)
Maths, presentation, formatting and more or less everything else should be done somewhere else.
A common problem with ORM layers is that people tend to do the filtering and aggregation in the business layer, sending huge amounts of data back and forth between the layers. This is not a problem with ORMs; it's an architectural problem, and people not knowing how to use their tools. Putting the business layer in the database is just as wrong.
The most common example is the N+1 problem. Assume you have a master-detail grid where the master has 1000 rows. Most ORMs work well when you use lazy loading, but if you've done it wrong and load all the details at once, you end up with 1001 queries to the database. With lazy loading the ORM fetches the data for a detail when the user clicks on it, and the chance that the user will click on every detail view is usually pretty small; but if you do need everything at once, you should use eager loading and create the SQL yourself.
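The arithmetic of the N+1 problem is easy to demonstrate with any database; the sketch below uses Python's built-in sqlite3 with made-up tables (a real ORM would issue these queries behind the scenes, one per lazily loaded detail collection):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE detail (id INTEGER PRIMARY KEY, master_id INTEGER, item TEXT);
    INSERT INTO master VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO detail VALUES (1, 1, 'x'), (2, 1, 'y'), (3, 2, 'z');
""")

# N+1 pattern: one query for the masters, then one more per master row.
masters = conn.execute("SELECT id, name FROM master").fetchall()
queries = 1
naive = {}
for mid, name in masters:
    rows = conn.execute(
        "SELECT item FROM detail WHERE master_id = ?", (mid,)).fetchall()
    naive[mid] = [r[0] for r in rows]
    queries += 1
print(queries)  # 4 queries for 3 masters; with 1000 masters it would be 1001

# Eager pattern: one JOIN brings everything back in a single round trip.
eager = {}
for mid, item in conn.execute(
        "SELECT m.id, d.item FROM master m "
        "LEFT JOIN detail d ON d.master_id = m.id"):
    eager.setdefault(mid, [])
    if item is not None:
        eager[mid].append(item)
print(eager == naive)  # True: same result, one query instead of N+1
```

Which pattern wins depends on access: if the user opens only a couple of details, lazy loading transfers less overall; if everything is needed up front, the single JOIN (or the ORM's eager-loading option) is the right tool.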
|
|
|
|
|
I really appreciate this very thoughtful answer, and will study it carefully. As I have studied the architecture and "gestalt" of Neo4j, I find I am very attracted to the model and its functionality, but do not want to get involved with all the graph visualization facilities. I discovered Neo4j while reading one of Marc Clifton's visionary articles here: [^].
I am a big fan of Mehdi Gholam's work, and have played with his RavenDB a bit. Long ago (in the 1980s) I took lecture notes for a lecture-note service for a young U.C. Berkeley computer scientist and lecturer, Mark Tuttle, who worked with Stonebraker (who later created Ingres and Postgres) on relational DBs, so I have had some exposure to "normalization", but not enough!
Since I am not constrained by financial considerations, I do have the time to spend on pursuing "out there" DB's, where relationships go beyond is-a and has-a.
Update: I have installed, and am studying, and using, the triple-store .NET graph-db BrightStarDB. Their excellent documentation and how-to's made it very easy to get a basic DB working (very refreshing, that experience).
My take-away from your very educational response is that I should do some basic study of SQL, first.
thanks, Bill
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
modified 15-Feb-15 4:17am.
|
|
|
|
|
I believe you're thinking of RaptorDB; I meant Ayende's ravendb.net[^], which is a different beast.
BillWoodruff wrote: My take-away from your very educational response is that I should do some basic study of SQL, first.
I find it more important to learn normalization: the whys and hows.
Before one works with denormalized databases, one needs to know when and how to cut corners without messing everything up.
But if you're into graph databases, this is not a problem; normalization is pretty much built into them.
What in an RDBMS would be the relation between two otherwise (usually) pointless (as in not carrying any information) ID columns in the related tables is, in a graph DB, an edge connecting two vertices: an entity of its own.
But they are still just two ways of doing the same thing, and the normalization rules still apply.
BTW, Thanks for mentioning BrightStarDB, I didn't know about it.
|
|
|
|
|