Ask Slashdot: Choosing a Data Warehouse Server System? 147
New submitter puzzled_decoy writes The company I work for has decided to get in on this "big data" thing. We are trying to find a good data warehouse system to host and run analytics on, you guessed it, a bunch of data. Right now we are looking into MSSQL, a company called Domo, and Oracle has contacted us. Google BigQuery may be another option. At its core, we need to be able to query huge amounts of data in sometimes rather odd ways. We need a strong ETL layer, and hopefully we can put some nice visual reporting service on top of wherever the data is stored. So, what is your experience with "big data" servers and services? What would you recommend, and what are the pitfalls you've encountered?
Skip Oracle. (Score:3)
Oregon Resident here. After the recent issues with Oracle..... yup. Not gonna recommend 'em again. Not a big fan of my tax money being wasted.
Re:Skip Oracle. (Score:5, Informative)
Financially - R is open source and free (as in both free as a bird, and free beer), so you don't need to buy it from Oracle. No doubt Oracle will make you buy their DBMS as well to work with Advanced Analytics, and a big server to run it on, plus support to get it up and running.
Technically - Oracle makes a good DBMS for sure, but you don't need all the advanced features their DBMS is good at, such as record-level locking, three-phase commit, redo logs, conflict resolution etc. You need that sort of stuff to maintain data integrity on transaction processing systems, but not for analysis. For analysis you just need a giant de-normalised table, and maybe indexes if you want to pick out specific subsets of records without full table scans.
Personally I use SAS. It's not sexy, but I have never found a dataset too large to handle. It will thrash the hard drive all night if it has to in order to get a result, but it won't crash. SPSS will definitely crap itself with even moderate datasets. Stata does OK, but even that can't handle the larger datasets. I haven't pushed R hard enough to find its limit.
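The parent's "giant de-normalised table plus an index" approach can be sketched in a few lines. A toy illustration (hypothetical table and data, SQLite standing in for a real warehouse):

```python
import sqlite3

# Toy sketch of the "one big de-normalised table" approach: every fact row
# carries its descriptive attributes inline, so analysis needs no joins.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sales_flat (
        sale_id INTEGER PRIMARY KEY,
        sale_date TEXT, region TEXT, product TEXT, amount REAL
    )
""")
rows = [
    (1, "2015-01-05", "EMEA", "widget", 10.0),
    (2, "2015-01-06", "APAC", "widget", 12.5),
    (3, "2015-01-06", "EMEA", "gadget", 7.0),
]
conn.executemany("INSERT INTO sales_flat VALUES (?, ?, ?, ?, ?)", rows)

# An index lets you pick out a specific subset without a full table scan.
conn.execute("CREATE INDEX idx_region ON sales_flat (region)")
emea_total = conn.execute(
    "SELECT SUM(amount) FROM sales_flat WHERE region = 'EMEA'"
).fetchone()[0]
print(emea_total)  # 17.0
```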
Re: (Score:1)
While your post is informative, it sort of misses the mark. Granted, TFS is clearly more on the clueless side, but you should have realized that 'analytics' here does not mean actual analytics, as much as simple BI reporting. The main requirement is ETL, reports are 'nice to have'.
Back to your post, it's nice to know about SAS. For SPSS you want to try and push as much processing into the DB as possible, otherwise you need to get the server version and throw hardware at it, the local server that comes with t
Re: (Score:2)
I don't get it. Why are you denormalizing your tables?
If you're talking about denormalizing, you're talking about a relatively complicated data set, else there would be nothing to denormalize. Almost nothing you'll do in SAS on any reasonably complex data requires all the fields. So any DB on the back end (postgres, mysql) should be able to join up what you need from a well-normalized dataset quickly.
Or do you mean you're just making a big text file (or SAS data blob) and using that in SAS? If that's the
Re: (Score:1)
I don't get it. Why are you denormalizing your tables?
Most likely to simplify a star schema with many dimensions, it's a standard approach to keep your query run times relatively sane.
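A toy sketch of what that flattening looks like (hypothetical star schema, SQLite standing in for a real warehouse): the dimensions are joined into the fact table once up front, so later ad hoc queries are single-table scans.

```python
import sqlite3

# Hypothetical star schema: a fact table plus dimension tables, collapsed
# into one wide table so ad hoc queries avoid the multi-way joins.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_store   (store_id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, store_id INTEGER, amount REAL);

    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO dim_store   VALUES (1, 'Oslo'), (2, 'Perth');
    INSERT INTO fact_sales  VALUES (1, 1, 5.0), (2, 2, 3.0), (1, 2, 4.0);

    -- Denormalize once, up front...
    CREATE TABLE sales_wide AS
    SELECT p.name AS product, s.city AS city, f.amount
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_store   s ON s.store_id   = f.store_id;
""")
# ...so later questions are single-table scans, no joins.
widget_total = conn.execute(
    "SELECT SUM(amount) FROM sales_wide WHERE product = 'widget'"
).fetchone()[0]
print(widget_total)  # 9.0
```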
Re: (Score:2)
SAS may be the best answer to "query huge amounts of data in sometimes rather odd ways". Using SQL Server for storage is fine, but not using anything else in front of it (SSAS is useless) is bringing a knife to a gun fight. Trying to do everything in a relational way means tying a hand and a foot behind your back. The real world doesn't neatly fit the model, hard as you might try to make it, so performance suffers greatly and doing unusual ad hoc things takes longer to figure out. Get SAS to send pure relat
Re: Skip Oracle. (Score:5, Informative)
There was a crime, and Oracle was a willing accomplice.
First step (Score:5, Insightful)
The first step is to ask Slashdot a really vague question about a highly technical and expensive undertaking.
Re: (Score:1, Insightful)
This. Redshift is far and away the cheapest and most straightforward solution. Hooks up nicely with Tableau to help analysts, efficient ingestion.
Re: (Score:1)
Redshift is a fantastic way to get started... the kind where you end up not needing to migrate to something else.
Get your feet wet (Score:3)
Personally, I think that the RedShift suggestion is perfect for OP. Judging by the vague requirements ("the big boss wants to get on the Big Data bandwagon!"), OP's company has no clue what it wants to do with its Big Data yet. So why throw down a ton of cash on a solution without having a good idea of what problem needs solving?
Playing around with RedShift a bit and seeing what value they can extract from their data would be a great pilot program. Later, once they know what they're doing, they can implemen
Define the goals (Score:1)
Define the goals. Don't mistake software for creativity and insight. If your company is going to crunch a lot of data find someone qualified to think analytically and recommend the correct tools for the job.
I hear that R is very much up-and-coming in statistical work. I also hear that any other 'big data' solution is going to cost you as much as a full-time employee anyway.
Also, yes, skip Oracle. If you put that much effort into tuning a system/the way you're asking the question, nearly anything could come up wit
Dear Slashdot, (Score:4, Funny)
Help do my job for me.
Re:Dear Slashdot, (Score:5, Insightful)
In general I don't mind such questions on Slashdot, as they're usually interesting and informative to the rest of us. And if they're not, then I (we) don't read the article!
Re: (Score:1)
It may not be fashionable in your circles, but human communication is, and will always be, a basic element of engineering.
Call up your favourite channel distie (Score:2)
The way you're going at it you're basically burning money. "We must have this big data thing too!" has every hardware vendor's eyes going "ka-ching," and you'll be overpaying whatever you do. Even if you think you're getting a good price.
The problem with big data as a thing (BDaaT) is that without a clear goal you'll be gathering too much data and storing it for too long. Thereby you "need" too much processing power to shoot through it, and the only way left is downhill. This creates myriads of problems, of w
Postgres-XL (Score:5, Informative)
Open-source so you don't have to cough up millions of dollars to see if you can get business.
Clusterable, scalable and standards-based so you're not locking down too far into one solution-space.
Not sure you know what you are talking about... (Score:2)
Not that I am a fanboi of Oracle, but ODI is a fantastic tool.
What would you recommend.. (Score:1)
do your job or go apply at mcdonalds.
Check out Amazon Redshift (Score:2, Insightful)
Pretty easy to try it out immediately... http://aws.amazon.com/redshift [amazon.com]
Re:Microsoft? (Score:4, Informative)
MSSQL?
Why would anyone in their right mind go with MICROSOFT for a company database? Especially a big data database?
I will not claim any "big data" experience.
At least you have an opinion informed by no experience.
Re: (Score:2)
I'm guessing he has experience with Microsoft, with respect to which his opinion is highly informed.
Re: (Score:2)
I've worked for large financial institutions and Unix is something that would be considered "small" in that environment. I've also seen departments shoehorn Microsoft products into a big problem and fail just to have to turn around and use something else.
Re:Microsoft? (Score:4, Interesting)
Microsoft doesn't win the real "Big Data" contracts, but there are many medium-data contracts with delusions of grandeur. I work with a TB-size (as in, >1 TB...) database and while it's certainly no longer small data it's not "Big Data". It fits in a traditional RDBMS; when we get past the buzzwords, what our users want are fairly traditional cubes/reports with drilldown that OLAP systems provide. If Microsoft is bad, the alternatives like Oracle, SAS, SAP or IBM are worse. Looking at an open source stack, replacing the database is actually the easy bit; I'm sure we'd do fine running on PostgreSQL or MariaDB. Reporting tools on par with Reporting Services are also easy to come by. I've seen nothing as user-friendly as Integration Services on the data flow side, which we use a lot, but I guess we could use it with foreign sources and destinations too.
Probably the biggest gap on the data warehouse side is an open source OLAP server. The Wikipedia page lists two: one is Palo/Jedox, which is a very limited marketing version of their commercial product, and the other is Mondrian, which on closer inspection seems to just translate MDX to SQL and let the RDBMS do the aggregation, which I suppose is okay for small data sets but will choke on any significant volume. Basically it comes down to all the Microsoft tools being "good enough" and working nicely together, while the rest ends up being a mix of different pieces from here and there. Either that or you're looking at a whole different stack, and I've got lots of requirements that'd make a NoSQL solution squirm.
Re: (Score:1)
For a more SQL-oriented approach with open source, take a look at the madlib library that extends PostgreSQL with user-defined types and many stored procedures for in-database analysis. It can also be scaled up with $$$ by running it on Greenplum instead of pure PostgreSQL, but you can go a long way with PostgreSQL on a modern, commodity server with large RAM (256GB...1TB) and/or fast disks (hardware RAID and/or SSD). You may be able to focus more funds on this hardware rather than a bunch of software lic
Re:Microsoft? (Score:4, Informative)
I recommend against MSSQL not because it's not a good DB (it is -- it was originally Sybase) but because it's cumbersome to work with outside of the Microsoft ecosystem. You mainly interface with it using ODBC and that's a pain outside of Windows. You're stuck with Windows boxes on the back end AND on the front end. You can add ODBC systems to the mid-layer/server boxes you'd rather have (Linux, usually) but now you're paying money to add a kludge. Furthermore, because it absolutely needs to run on Windows on the back end, you have to pay employees who are generally the sort who are going to want more Microsoft tools, so you'll be creeping more and more away from free stuff which is easy to maintain to a bunch of licenses and a complex setup. (Had to get a bunch of Windows boxes set up with precisely this sort of issue just a few weeks ago -- man! was it painful.)
You could start your project with Postgres and find out why you're unhappy with it and plan for a migration to something which is better for you post-hoc: Don't write SQL procs, and don't weave your SQL through a whole lot of code. Though frankly, the suggestions for Red Shift seem right on the money. They use Postgres drivers, JDBC, and ODBC, so you're set on any platform you want to work on without any added cost. They have a two-month free trial. You could try that out first and figure out what you're unhappy with there as a first step. Same rules apply -- keep things simple.
DBs are not for chewing data -- they're for giving you just the data you need so you can chew on it. You use the right tool for the chewing job once you have the data. (Some DB pre-chew is fine in situations where it's efficient and easy -- group by's, mostly.) So it doesn't matter that much how long your DB's feature list is. What matters is that it's fast and you can get data in and out of it just about anywhere you want to. I've seen shops where they do all their data chewing in SQL Server. They write reams of ugly, ugly code. They do this because they know how, and don't realize that a little work learning other things would make them vastly more efficient. The thing to always remember is that you don't buy a hammer and assume everything is a nail. Buy something which works with lots of other tools and pick the right ones for your job.
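The "let the DB pre-chew, then use the right tool" split might look like this (hypothetical data; SQLite and the Python stdlib as stand-ins for a real DB and analysis tool):

```python
import sqlite3
from statistics import mean

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, ms INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("a", 100), ("a", 300), ("b", 50), ("b", 150), ("b", 250)])

# Let the DB do the cheap "pre-chew" (a GROUP BY)...
per_user = conn.execute(
    "SELECT user, AVG(ms) FROM events GROUP BY user ORDER BY user"
).fetchall()

# ...and do the real chewing in the tool of your choice.
overall = mean(avg for _, avg in per_user)
print(per_user)  # [('a', 200.0), ('b', 150.0)]
print(overall)   # 175.0
```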
Re: (Score:3)
I recommend against MSSQL not because it's not a good DB
I'm assuming this is based on your extensive MSSQL experience, right?
but because it's cumbersome to work with outside of the Microsoft ecosystem.
Well, at least MS has half-decent tools for it. Other than Oracle, they're the only player with a decent GUI interface.
You mainly interface with it using ODBC and that's a pain outside of Windows.
Or JDBC. It depends. It's not really Microsoft's fault if it doesn't work in your environment, is it?
You're stuck with windows boxes on the back end AND on the front end.
You just summarized 2/3rds of the corporate world.
You could start your project with Postgres and find out why you're unhappy with it and plan for a migration to something which is better for you post-hoc: Don't write SQL procs, and don't weave your SQL through a whole lot of code.
You could, and then figure out how to integrate it with your environment. And *WHICH* ODBC driver to choose. So, the pain you just described previously, its right there. Wit
Re: (Score:2)
I recommend against MSSQL not because it's not a good DB
I'm assuming this is based on your extensive MSSQL experience, right?
Yes, it is.
You're right on the replication. I think that's Postgres's obvious weak point. It's what you'd find that you didn't like. I assume that's why you ignored Red Shift. The rest of your arguments simply prove my point.
Re: (Score:2)
It's what you'd find that you didn't like.
It still is a huge limitation, as you cannot easily sync a local dataset with a remote one.
I assume that's why you ignored Red Shift.
RedShift has a limitation of 16TB per node. It's nice, but not really "big data". It's more like RDS on steroids, and I think RDS is "so-so". Also, you either use sync from/to Amazon interfaces, or you're stuck with JDBC, so basically the same limitations you mentioned apply.
FreeTDS (Score:1)
FreeTDS works well. Why would you have to use ODBC?
Re: (Score:1)
DBs are not for chewing data -- they're for giving you just the data you need so you can chew on it. You use the right tool for the chewing job once you have the data. (Some DB pre-chew is fine in situations where it's efficient and easy -- group by's, mostly.)
Seems like there aren't many responses here talking about columnar databases [wikipedia.org]. This a class of relational databases very well suited for data warehousing. I have been working with Vertica, which is a proprietary technology, but the license terms are much more favorable and fair than what you get out of Oracle (they aren't comparable anyway). It's a mindset change when you get into columnar databases, but on the whole they can be simpler than what you get trying to tune a traditional relational database fo
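The row-versus-column layout difference the parent alludes to can be illustrated with plain Python structures (a toy model, not how any real engine stores data):

```python
# Toy contrast between row and column layouts. A column store keeps each
# column contiguous, so an aggregate over one column touches only that data.
rows = [  # row store: one record per tuple
    {"region": "EMEA", "amount": 10.0},
    {"region": "APAC", "amount": 12.5},
    {"region": "EMEA", "amount": 7.0},
]
columns = {  # column store: one array per attribute
    "region": ["EMEA", "APAC", "EMEA"],
    "amount": [10.0, 12.5, 7.0],
}

# Row store: every record is visited even though only 'amount' is needed.
total_rows = sum(r["amount"] for r in rows)
# Column store: scan a single contiguous array.
total_cols = sum(columns["amount"])
print(total_rows, total_cols)  # 29.5 29.5
```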
Hadoop (Score:2)
Don't waste your time and money, just go with Hadoop.
Need ETL? Well, for one there is Pig, but if you want to do stream processing, Apache Storm / Kafka.
Take a look at this, http://hortonworks.com/hdp/ [hortonworks.com]
All completely Open Source.
Re: (Score:3)
And yet, data warehouse data off-load and outright replacement is one of the more popular Big Data applications right now.
The main driver is the prohibitively expensive storage, user license and "per core" cost of traditional databases.
There's also a fundamental question hidden underneath the big data vs BI dilemma: how do you model against requirements (Kimball) when you don't have the requirements yet and you still want to keep all options open? Another one is how you can successfully open up PBs to end
Re: Hadoop (Score:5, Informative)
Because that kind of setup works mostly for highly specialized requirements, such as processing ad clicks or log files. That's totally different from a data warehouse, where you store a lot of data with the idea that users can do a bit of exploration and analysis on their own using client tools like Excel, Tableau or MicroStrategy.
There are 3 kinds of setups for Big Data:
1) Massively parallel processing, such as AWS Redshift or Google BigQuery (or IBM Netezza if you have money). Those are regular databases on steroids and they let you query data on your own. Redshift is basically a huge multi-tenant Postgres cluster.
2) MapReduce, such as AWS EMR. This is more or less a clunky kind of ETL where you need to code every single question to which you want an answer. It scales well on the volume side (because of Hadoop distributed file system) but it is extremely tedious to implement and offers zero self-service capabilities for data analysts beyond what is hard-coded in your setup. The ETL language from Apache, Pig, is very basic - for just about everything you need to fire up Eclipse and write Java code. There are a few SQL frameworks that can sit on top of Hadoop, but none are blazing fast or immensely reliable, and for the most part with those SQL solutions it ends up being a cheapskate alternative to a proper DW.
3) Machine learning, such as Spark or Mahout (also based on Hadoop file system). Those also require extensive programming and typically won't offer clear answers, they are mostly useful to find trends or patterns. It's all the rage right now with "data scientist", just like MR was all the rage 3 years ago and did not really stick because it's too clunky. Again this is a scenario where you know what you are looking for, because you have to "train" your system for specific tasks.
HortonWorks is an all-inclusive Hadoop setup that includes most of what is needed for #2 or #3, but since AWS and Azure offer a totally scalable Hadoop environment for pennies, in my experience HortonWorks is for companies who want nothing to do with the cloud or for total newbies who want to see what that Hadoop thing is. But it does not offer the benefit of letting you learn what the moving pieces are, because it comes all configured.
So unless you have a very specific set of reports or indicators and a shitload of data, the only serious answer is to keep doing what BI people have been doing for decades: build data warehouses and use a decent front-end that includes a flexible reporting platform and self-service capabilities (such as OLAP). And only if you have tons of data should you even bother with Big Data products, as none of those are cheap. Redshift is in the $1000-$5000/TB/year range. For a large organization that's nothing, but for some guy trying to start a vague BI initiative that's expensive.
When it comes to non-Big Data BI (i.e. something to setup on a few servers at most), the options are the following:
1) SQL Server and its built-in BI suite, or Oracle and its built-in BI suite. A bit expensive but very flexible. Not ideal for self-service unless you have experienced DBAs.
2) Any RDBMS plus IBM Cognos or SAP BusinessObjects. Expensive but you can define a data universe then let users build their own reports. Ideal for self-service and for situations where you don't have a full-time DBA who can write queries or build OLAP cubes.
3) A patchwork of FOSS: MySQL, Mondrian, Jasper, Talend, etc. Free but not integrated so it requires a bit of work.
Big Data != BI. It just means that you have more data than you could process on a regular database cluster. Even with social networks, ads and blogs, I haven't seen that many situations where this is truly needed.
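The "code every single question" point about MapReduce shows up even in the classic word-count example. Here is a minimal single-process sketch of the map/shuffle/reduce pattern (illustrative only, no Hadoop involved):

```python
from itertools import groupby
from operator import itemgetter

# Word count in the MapReduce style: map, shuffle (sort + group), reduce.
# The point from the thread: each new question means writing another program
# like this, where a warehouse user would just type a different query.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)          # emit (key, value) pairs

def reduce_phase(pairs):
    shuffled = sorted(pairs, key=itemgetter(0))   # the "shuffle" step
    return {word: sum(count for _, count in group)
            for word, group in groupby(shuffled, key=itemgetter(0))}

counts = reduce_phase(map_phase(["big data big", "data big"]))
print(counts)  # {'big': 3, 'data': 2}
```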
Re: (Score:3)
I'd actually say that I consider the MapReduce only focus as a limitation of Hadoop, but the fact that so many other tools have been built on top and so many things integrate is definitely a huge asset in its favor.
Most of the tools built on top of Hadoop use HDFS (the Hadoop filesystem) and no MapReduce at all. I think you are a textbook example of someone who learned Hadoop by using HortonWorks and therefore has no idea what the various underlying moving parts are.
Re: (Score:2)
And if you want visual drag-and-drop ETL development and orchestration, use Pentaho Data Integration (a.k.a. Kettle). Comes in open source with an Apache license or professionally supported. Supports visual Map/Reduce development, integration with Pig, Sqoop, Oozie, ...
For SQL you can use Hive but try one of the alternative engines like Impala as well.
Re: Hadoop (Score:2)
If you want your Hadoop cluster to be fast and easy to use, go with Spark https://spark.apache.org/ [apache.org].
Analytics + mssql = fail (Score:4, Informative)
Whatever you do, don't go mssql as you will end up processing most of your data in the analytics tool.
I've seen it lock tables even on only reads causing other processes to be terminated.
The closest it has got to materialized views are clustered indexed views which suck and can barely do any processing.
Re: (Score:1)
Yeah, I use only the sql side, and I really hate ssrs and I never use ssas.
SQL Server reporting services is great, until you want to do something really cool and complex, and then it is a hellish wasteland of tears.
I WARNED YOU!
Re: (Score:2)
I've seen it lock tables even on only reads causing other processes to be terminated.
That's because someone who does not understand how the product works has configured a serializable transaction isolation level. I would suggest to RTFM but maybe you need to start with the basics: http://en.wikipedia.org/wiki/I... [wikipedia.org]
Unlikely scenario (Score:2)
In SSIS (the ETL tool that comes with SQL Server), the default isolation level is serializable. People often use SSIS to stage data and/or feed a denormalized data warehouse.
Someone claiming that an analytics tool is causing locks in SQL Server does not know what they are talking about. The most recent BI engine from Microsoft (Tabular) does everything in-memory, and with the older one, which is OLAP-based, data is typically moved out of SQL Server and into an SSAS cube.
There's the possible scenario of some
Re: (Score:1)
Whatever you do, don't go mssql as you will end up processing most of your data in the analytics tool.
Why?
I've seen it lock tables even on only reads causing other processes to be terminated.
Try enabling snapshot isolation if you want MVCC
The closest it has got to materialized views are clustered indexed views which suck and can barely do any processing.
Try columnstore indexes if you want your mind blown.
we need more details on this "big data thing" (Score:5, Informative)
Big data is an entire field of study; this is not "should I use vi or emacs or nano", and even that requires a shitload of context and is the source of flame wars until the end of time.
Think about your budget, your audience, and the value that you can add by spending time and money on this.
MapReduce (Hadoop) is awesome and open source; you can run it in-house or in multiple cloud offerings, and it has a tremendous community. BUT it sucks at relationships (foreign keys), graph calculations and other things.
Graph databases can make connections between things that are impossible in other systems, but are only good for graph relationships.
OLAP data stored in n-dimensional cubes allows reporting and analysis in familiar tools that many analysts (not programmers) think are the cat's pajamas.
Your best bet is to slow down and talk to your users, while reading Seven Databases in Seven Weeks
https://pragprog.com/book/rwdata/seven-databases-in-seven-weeks
And then realize that you probably need to hire a consultant so you have somebody to fire when the whole thing goes south.
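The graph-database point above can be illustrated with a toy adjacency map: "who can reach whom, at any depth" is a short traversal in code, but needs recursive joins in a plain relational setup (hypothetical data):

```python
from collections import deque

# Transitive connections over a toy graph. A graph database answers this
# kind of question natively; in a relational store it takes recursive joins.
edges = {"alice": ["bob"], "bob": ["carol"], "carol": [], "dave": []}

def reachable(start):
    """Return every node reachable from `start` via a breadth-first walk."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(reachable("alice")))  # ['bob', 'carol']
```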
Re: (Score:2)
I'm out of modpoints but I would like to stress that _this post_ is an example of why Ask Slashdot is so successful at answering questions that boil down to "I don't know what I need to know to get this job done". This is the type of answer that will put the OP on the right track to figuring out what he needs.
Re: (Score:3)
Plus the strategic element of bringing in a consultant. Outside expertise is valuable not only for the expertise, but also because of other less tangible benefits. The outside guy is always more trusted by the business units. It is just human nature. You can lecture everyone on the benefits of some new initiative until you are blue in the face and get nowhere, but bring in a consulting firm to say the same thing and everyone suddenly thinks it is a great idea.
The same goes for having a scapegoat when th
That isn't big data (Score:5, Insightful)
If the data fits in a database, it is not Big Data.
Re: (Score:1)
You'd make a great CIO
Re: (Score:2)
There are companies with multi-PB databases (PB, not TB).
Apache family (Score:3)
As a previous poster pointed out, there is also PostgreSQL, again FLOSS. Again downloadable and playable-with.
But what do you need? (Score:5, Insightful)
Sounds like you're very good in the buzzword department but have no idea what you're doing at all... What kind of data are we talking about? Lots of writes? Lots of reads? Is the data suitable for splitting up? What kind of queries will you need to run? Do you need uptime? Or consistency?
Also if you're looking at MSSQL or Oracle, you obviously DO NOT HAVE Big Data. Big Data is data that cannot be dealt with using regular RDBMSes. Do you really have or plan to have multiple terabytes of data? If not, you don't have big data.
Based on the information you've given us we cannot give you any advice at all apart from stopping what you're doing and hiring an expert.
Re: (Score:1)
My thoughts exactly. This question is stupid.
Re: (Score:1)
Exactly
Big data is a different thing from data warehousing.
In a big data scenario you have lots of data, that you process with a highly scalable solution.
In a data warehouse you collect data from different sources and transform it in several steps into a data model you can easily create reports from.
And there is the other option where you just have to process lots of records from a similar source (measuring data), where you carefully monitor and tune the processing of that data.
The question does not e
Re: (Score:2)
The key problem is most businesses run into their own insecurities.
They are afraid of picking the uncool system that in 5 years would be scoffed at.
Such as creating a new web app in Perl, nothing technically wrong but it isn't cool anymore.
It's the old "nobody ever got fired for choosing IBM" thing. It is more about picking the name that's supposed to impress your customers, not what is best for the job.
Re: (Score:2)
"Big Data is data that cannot be dealt with using regular RDBMSes"
Let's say: you may need or use a "regular" RDBMS for some things in big data, but it's not going to be your "Data Warehouse".
Re: (Score:2)
Open source (Score:2)
You should look at IBM Bluemix. I've heard good things about it. Watson integration.
You are looking for the wrong product/service (Score:2)
You're asking the wrong questions. You should start higher up the chain in business-value land - WHY do you need a data warehouse system (to run analytics)... great, WHY do you need to run analytics (to discover XXXXX from the data we generate/own/handle). OK, now you're getting closer... now, armed with the knowledge about what data you will be storing, and what kind of insights you would like to generate, you need to approach a spe
Re: (Score:2)
Right now, if you are starting with "Data Warehouse" you probably are using the wrong answer key to score your wrong questions.
Good grief... (Score:3)
If your company buys 'big data', I have a bridge to sell you.
Know your data. Don't build a castle in the sky; that's how SAP happened.
You must follow the correct process. (Score:5, Insightful)
1. Hire some bonehead that is expendable and ask him to make the decision.
2. Fire him when the project fails.
3. Nobody will ever bring this up again.
Re: (Score:3)
Have a look at Teradata (Score:2)
The company I consulted for uses SAS (on the mainframe, AIX boxes, and PCs) for almost all of its data processing needs, including ETL work. Now they're looking at "Big Data" and discovered they need parallel processing to make it cost-effective (outp
Re: (Score:2, Informative)
Former back-office Teradata employee here. Teradata makes a very powerful product, but if security and availability of your data is critical, then I would look elsewhere. I'm not going to divulge any company secrets, but I will copy some snippets from employee reviews on Glassdoor:
"Security is nonexistent. LAN credentials are sent in plain text (unencrypted) everywhere... CUSTOMER credentials to CUSTOMER systems (IP addresses and credentials) are sent in plain text (unencrypted)"
"IT outages are frequent,
Finally... (Score:1)
ElasticSearch, Logstash, Kibana (ELK Stack) (Score:3)
The ELK Stack might be an option. In my field, (many) web servers can stream all their logs off-site in real time using Logstash Forwarder (or instead they might use rsync, or rsyslog, or...). A central server, perhaps in a secure private intranet, reads and indexes this log data (that's ElasticSearch, which is sort of like a personal Google for your logs, any logs of any kind, or other Big Data). Kibana is a user-friendly Angular.js application and presentation layer. If you're familiar with NewRelic for server monitoring, you can save views just like when using that tool.
http://jakege.blogspot.nl/2014... [blogspot.nl]
Okay, maybe this is sort of like "when all you have is a hammer, everything looks like a nail", but this suggestion is the extent of my background in this area. I have had an itch to scratch, though, and so far this is my best open-source result.
There's a ton of citations you should search for yourself, but I'll provide one I found that might start to help. Using this tool, it is fairly easy to parse out the myriad hacker efforts at attacking the servers, for example; even when you're the NY Times.
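For a sense of what the Logstash stage of that pipeline does, here is a rough Python sketch that turns a raw access-log line into structured fields for indexing (hypothetical log format; real Logstash uses grok patterns for this):

```python
import re

# Turn a raw Apache-style access-log line into a structured document,
# roughly what Logstash does before handing records to ElasticSearch.
LINE = '203.0.113.7 - - [10/Feb/2015:13:55:36 +0000] "GET /admin.php HTTP/1.1" 404 162'
PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse(line):
    """Return a dict of named fields, or None if the line doesn't match."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

doc = parse(LINE)
print(doc["path"], doc["status"])  # /admin.php 404
```

With the fields structured, spotting attack traffic (e.g. repeated 404s against `/admin.php`) becomes a simple query rather than a grep.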
Oracle users hate them... (Score:2)
I know a few. They are all looking at options to get rid of Oracle, and often of Solaris as well. On the other hand, MSSQL is still basically a toy. It really depends on your data model and the queries you run. Key-value stores ("NoSQL"), for example, are really easy to distribute over many servers.
Exadata (Score:1)
First step - hire a consultant (Score:2)
This is a MASSIVE undertaking, requiring deep and profound strategic decisions to be made at the highest levels of the company/organization.
To go all in on whatever advice you might receive from Slashdot is foolhardy at best.
Do yourself and your company a favor, hire a world class consultant to come in and provide some advice.
dont do it (Score:2)
If you need to ask this question on Slashdot then chances are you don't have the skills to build and run such a system properly.
Microsoft APS(formerly PDW) (Score:2)
I would strongly recommend looking into Microsoft's Analytics Platform System (APS), formerly Parallel Data Warehouse (PDW). It's an MPP appliance that combines PDW and Hadoop. I got to spend a week on one of these appliances recently and I can't wait to get back on it. It supports combined queries using PolyBase across Hadoop and the data warehouse (as well as the cloud).
Typically data scientists will want to work in Hadoop and use R,
Re: (Score:2)
With that said, I do understand that IT infrastructure can get a little butt hurt about installing appliances. Like I said, I've been doing tech for 30 years now and one trend I have noticed is that a lot of IT departments have drifted away from the customer
Go to Big Data Meetup in your area (Score:2)
You can find out a lot in a few hours just by going to a Big Data meetup. Traditional database vendors are trying to hijack big data and make it their buzzword. Real big data players are using tools like Hadoop, Spark, Solr, Elasticsearch and other tools that allow you to use commodity hardware to get a much more performant platform for big data. The appliance vendors have some interesting off-the-shelf stuff... you should really take some time to see what is going on... it's wild west time.
Greenplum or Redshift (Score:2)
We use Greenplum ( http://en.wikipedia.org/wiki/G... [wikipedia.org] ), which is a clustered Postgres implementation. It has its problems (Postgres 8.2? seriously?) But it is very fast for ETL and batch queries on large data sets. We house 100+TB and get excellent performance. It's commercial and you pay by the TB.
Then there is also AWS Redshift. We have found it to be quicker at some things and possibly cheaper but immature in its feature set (no UDF, etc). The thinking here is that if you have a separate system for ETL, Redshift would m
Are you sure its Big Data? (Score:2)
Don't confuse a regular data warehouse with Big Data. If Big Data is a "thing" your company wants to get into, it probably does not apply to you.
As for your data warehouse, MS SQL Server is a good enough base to start with. IBM's DB2 is another underrated platform. Don't feed Oracle, please.
Nature of the load makes one thing simple... (Score:1)
A complete response would take too much space... (Score:2)
This is a wildly nontrivial question. Volumes are written about building data warehouses, and there's a lot to consider. In a large complicated environment, you could spend weeks doing comparisons (some people spend years, but that seems extreme); and some of the decisions are worth weighing.
The first question is what capability are you looking for -- why are you sure one of these vendors is correct, and have you truly explored your options? If you want a place to capture and gather lots of near-real-tim
You're dropping nickels all over the place! (Score:1)
Insight from a Few Years Experience (Score:1)
No offense, but from the sound of it you have no clue about BI infrastructure, which is what you're talking about. If your company is serious they'll hire a team of 10 people with an average salary well north of 100k and have a couple-million-dollar budget per year for IT systems, including an analytic database, ETL system, and BI application.
My guess is that you just want to start off by incrementally building a DW and want ad hoc analytic capabilities. My proof of concept solution would be to use Penta
You've got one shot to store your data (Score:1)
When a piece of data comes in, store it everywhere you need it. This might be aggregated tables (if you don't use indexed views) or whatever you may need. If you have background processes like ETL, you'll use a lot of your hardware for processing at the expense of queries.
Avoid ETL. You've got one shot to store your data everywhere.
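A minimal sketch of that "write it everywhere on arrival" idea, with in-memory structures standing in for real tables (illustrative only):

```python
# On ingest, each event is written both to the raw detail store and to a
# running aggregate, so no later ETL pass is needed to build the rollup.
detail = []       # raw, row-level store
by_region = {}    # pre-aggregated rollup, maintained on every write

def ingest(event):
    detail.append(event)
    region = event["region"]
    by_region[region] = by_region.get(region, 0.0) + event["amount"]

for e in [{"region": "EMEA", "amount": 10.0},
          {"region": "EMEA", "amount": 7.0},
          {"region": "APAC", "amount": 12.5}]:
    ingest(e)

print(len(detail), by_region)  # 3 {'EMEA': 17.0, 'APAC': 12.5}
```

The trade-off the parent names is visible here: every write does more work up front, but the aggregate is always current at query time.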