
SAPPHIRE and ASUG 2012 Orlando – Days 0 and 1

This week I am attending the SAPPHIRE NOW and ASUG Annual two-in-one conference. There are many interesting developments going on at SAP in the areas of cloud, platforms, and mobility, but my focus is broadly BI/EIM/DM, and that is what I am after at the conference. The co-location of SAP’s premier business conference and the SAP users’ largest event gives an interesting – and sometimes polarized – view of the state of the business: you see cool HANA-powered, mobile-steered 3-D demos on the show floor, and you listen to real customers’ far-from-perfect stories.

It’s been just two weeks since the end of my first SAP BusinessObjects BI 4.0 implementation project (in my day life I am a technology consultant), and the everyday-life stories are closer to my heart than bright roadmaps. My first experience with SBO BI 4.0 was polarized as well. On one hand, the breadth of the tools portfolio and the user-facing UI are way better than what SAP BW could offer with its BEx applications. On the other hand, from the administration point of view I would expect much more from the BI Platform’s CMC as a single tool for administration, monitoring, and troubleshooting. Yet all of this is a subject for a separate post. The point here is: with release 4.0 of SBO BI, I can easily say that it is now at the stage where BW customers can and should start looking at adopting the BusinessObjects BI tools as the front end for a BW data warehouse. Not completely there yet, but certainly incomparably better than it was three years ago. I ran into some discussions with fellow SAP Mentors over that statement. As I was complaining that we waited four long years for good BW/BO integration, my dear colleagues with a pure BusinessObjects background, like Mico Yuk here, were complaining that SAP almost completely forgot about them and about innovations for the non-BW customers. What is your opinion?

The first ASUG education session I attended was by Jeff Duly, on an accelerated implementation of SBO Explorer by their BW shop. Aside from a few minor issues, the business response was overwhelmingly positive, which only confirms that SAP’s decision to acquire a mature BI tool set in 2007 was the right one (see also the remarks above about the need to continue the integration and unification work).

The second ASUG session on my radar was an SAP HANA Ramp-Up customer story. On closer look, the story was rather about the benefits of a BI implementation, as the customer moved from manual collection of data from four source systems and development of reports in MS Excel to a BI solution using SAP BusinessObjects’ WebI and Xcelsius as the front ends and the SAP HANA database as the repository for the new data warehouse. What was extremely interesting in that session were the lessons learned. I think they are important enough to repeat here:

  1. Data. The SAP HANA database by itself helps with processing speed, but without proper data quality, you will just get garbage out faster. I blogged about this a year ago in “Critical Success Factors for SAP HANA implementations”.
  2. Ramp-up means “bleeding edge”. Experienced professionals know the meaning behind someone’s sentence “We learned a lot during this implementation” 😉
  3. Realistic speed depends on complexity. In reality, the performance of query processing depends on many factors, the major one of which – once the I/O bottleneck is eliminated – is the complexity of table joins.
  4. UI rendering time. The SAP HANA database dramatically improves data-retrieval performance, but that is still only one of several steps on the way between the user’s request click and the final display. According to the presenters, the time to transfer data from HANA to the end-user machine and then get it displayed could go up to 20 seconds – leaving the impression of “long processing” even when the query processing in HANA had taken only 2-3 seconds.
  5. Realistic time frame. They kicked off the project in May 2011 with a planned go-live in September 2011. But the acceleration of data processing does not translate into the acceleration of project phases (although it no doubt means less frustration for everyone ;-), and the project went live 4 months later than expected.
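Point 4 above can be made concrete with simple arithmetic: the response time the user perceives is the sum of every hop, of which in-database query processing is only one. The figures below are purely illustrative, loosely based on the numbers the presenters mentioned:

```python
# Illustrative breakdown of end-to-end response time, in seconds.
# Only the first phase runs inside HANA; the rest happens outside it.
phases = {
    "query_in_hana": 2.5,   # in-memory query processing
    "data_transfer": 9.0,   # moving the result set to the client
    "ui_rendering": 8.5,    # building the display on the end-user machine
}

total = sum(phases.values())
hana_share = phases["query_in_hana"] / total

print(f"total: {total:.1f}s, HANA share of total: {hana_share:.0%}")
# → total: 20.0s, HANA share of total: 12%
```

Even with hypothetical numbers like these, the lesson holds: speeding up the database alone cannot fix an end-to-end experience dominated by transfer and rendering time.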

The customer implemented the project with the engagement of SAP Consulting, and as I look around, that is the case for almost every HANA implementation right now. If you are not the consulting arm of one of the hardware vendors (HP, Hitachi, IBM, etc.), and if you are not one of the IT advisory companies (Deloitte, Capgemini, etc.) already working with the customer on a broad scale, it is difficult to get onto HANA implementation projects. It may be a cold shower for many smaller System Integrators (SIs). As HANA fever spreads, many of these SIs are establishing “HANA Centers of Expertise” and “HANA practices”, putting directors in place, making press announcements – but they do not have projects… Here at SAPPHIRE I have already met two of these “directors”, who approached me asking what HANA implementation projects look like and what works and what does not.

My first piece of advice for them was: re-think what you want to do, analyse the broadening spectrum of HANA applications, and then try to focus. The thing is that the SAP HANA world is huge and expanding, and you cannot be in all places at all times – especially if you are a small SI or a boutique firm.

My second piece of advice was: SAP HANA itself is just one element of SAP’s broader database portfolio. If you want to focus on the SAP database business, you need to make sure that besides HANA you can speak Sybase as well. If you want to focus on SAP business applications powered by HANA (like Rapid Deployment Solutions – RDS), you need to speak the broad portfolio of SAP applications in the LoB or industry area, which may not be powered by HANA today. If you want to focus on analytics, you need to speak the broad SAP BusinessObjects portfolio – BI and EIM – as well, and know how to build solutions in SBO that run not only on the HANA database, but also on MS SQL Server, Sybase IQ, HP Vertica, etc.

Going back to those who want to focus on HANA as part of the database business with SAP: it is important that you separate hype from reality and understand the April 10th announcement of the “SAP Real-Time Data Platform”.


Filed under BusinessObjects, BW, HANA, Rant, SAP, Sybase

Big Data and SAP HANA? Or Sybase IQ?

Like a few other folks, I think there has been some kind of misunderstanding in mixing Big Data and SAP HANA into one bag. We touched on this topic in the recent podcast “Debating the Value of SAP HANA”, but I would like to spend a few more minutes here explaining my thoughts.

SAP HANA has been created with traditional SAP Business Suite and Business Warehouse (BW) customers in mind. How big is the biggest single SAP software installation in the world in terms of single-store data size? I do not know exactly. The times of the proud “Terabyte Club” are in the past. Four years ago there was a lot of noise about a 60TB BW test SAP did. The biggest customer I worked with had a 72TB database of BW data. So I would assume that the biggest SAP instance is somewhere close to 120TB. That is still a lot of data not just to process, but also to manage (think backups, system upgrades, copies, disaster recovery, etc.)… Current technical limitations aside – an 8TB maximum certified hardware configuration and a 2-billion-record limit in a single table partition – SAP HANA is on the way to helping SAP ERP and BW customers with those challenges. But those are not what the industry calls “Big Data”.

Here are the main differences as I see them:

  • The data sizes we are discussing with SAP HANA are in the ballpark of a few terabytes, while Big Data currently means something in the single-digit petabytes. E.g., HP Vertica has 7 customers with a petabyte or more of user data each, according to Monash Research.
  • The current focus of SAP HANA is structured data, while Big Data issues are generated mostly by unstructured data: web, scientific, machine-generated. It is fair to mention, though, that SAP is working on Enterprise Search powered by HANA, as Stefan Sigg, VP of In-Memory Platform at SAP, told me during this TechEd Live interview.
  • Currently, Big Data processing is almost synonymous with the MapReduce software framework, where huge data sets are processed by a big cluster of rather cheap computers. SAP’s in-memory technology, on the other hand, requires “a small number of more powerful high-end [servers]”, according to Hasso Plattner’s book “In-Memory Data Management: An Inflection Point for Enterprise Applications”.
  • Related to the point above: the SAP HANA promise is real time, where a fact is available for analysis subseconds after it occurs, while in Big Data algorithms processing is mostly batch-based. My previous blog post became available in Google Search results and in Google Alerts only 4 days after being posted – not quite real-time, huh?
  • SAP HANA data analysis is most often paired with SAP BusinessObjects Explorer – modeless visual data search and exploration. Using MapReduce libraries on top of Big Data requires advanced programming skills.
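To give a feel for the last two points – why MapReduce is batch-oriented and why it demands programming rather than point-and-click exploration – here is a toy word count written in the MapReduce style. This is a simplified single-machine sketch, not actual Hadoop code; in a real cluster the framework would distribute each phase across many cheap machines:

```python
from collections import defaultdict

def map_phase(documents):
    # Mapper: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate the grouped values - here, sum the counts.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data is big", "data is processed in batches"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["big"], counts["data"])  # → 2 2
```

Even this trivial job requires writing and wiring up three functions over the whole data set before any answer appears – a batch mindset, and a far cry from a business user clicking through facets in Explorer.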

During the SAPPHIRE’11 USA keynote speech, Hasso Plattner mentioned MapReduce as a roadmap feature for SAP HANA, but since then I have not gotten any specifics on what that means. Meanwhile, the quietly announced Release 15.4 of Sybase IQ has introduced some features focused on analysis of Big Data in its original meaning. Is there a silent revolution going on at SAP on the Sybase side, while all eyes are on the HANA product?


Filed under HANA, SAP