Unifying Modern and Enterprise BI | Dynamics 365 for Power BI


>>Self-Service BI and Enterprise BI. As I said, those are Power BI's origins. How many of you use Power BI in Azure? All of you, good. So, you go to Power BI Desktop. You go and you get data from everywhere. You create a model. You create your report visualizations. All of you can do that. All of your business analysts can do that. Everybody can do that. This is Self-Service BI. This ease of use, the [inaudible], the free desktop, is what made Power BI as popular as it is today. But as I said, IT wants more. IT wants to be able to go and build not just these small BI models. They want to go big. They want to put the full breadth of their semantic models into Power BI. That requires much more complex models: more measures, more tables, more scale, much more data.
Well, we are lucky. Power BI's foundation is built on top of decades of IP that we have at Microsoft. I've been working in the SQL Server organization for almost two decades building SQL Server Analysis Services. How many of you have used Analysis Services before? Yeah. So, you know what I'm talking about. This is the workhorse of the BI industry. This is how we really democratized BI, bringing it to every company, because we lowered the cost of BI servers. You know the kind of power we have there; it is the backbone of Power BI today, and what we're doing is opening the full power of Analysis Services up to Power BI users. We focused on making it simple for the BI users, for the data analysts, but no longer just that. We're really going to open up the floodgates to get all the features, all the APIs, all the utilities working with Analysis Services, which is the bedrock of Power BI. How many of you are using
Reporting Services? SQL Server Reporting Services? Well, yes. That's the other big engine that we have. That one is also coming to Power BI. We're bringing Enterprise Reporting to Power BI, which means you can get paginated reports with headers, with footers. You don't just have the single-page reports that we have in Power BI today, but really multi-page reports, so that you can have product catalogs, and orders, and invoices, and inventory lists. Everything that you're used to from Reporting Services, all of that is coming to Power BI. You'll be able to schedule it and send it to people's mailboxes with personalized slices, and under certain conditions; all the stuff that we need in order to really get Enterprise Reporting into the enterprise. That's not all of it, because when you ask the guys
in IT what they really, really care about, what keeps them up at night, it's all about governance, security, the things that could go wrong. We're putting a massive effort around the whole notion of enterprise governance, management, and control this year. So, we're going to work on things like multi-geo deployment. If you came from
around the world, you know that there are
lots of regulations where the data can be stored. If you’re in Germany, the data
has to stay in Germany. If you are from China,
it has to stay in China. We now allow you to deploy Power BI in a way that the data will reside in the region it needs to reside in, to make sure that it's compliant. So, Power BI is still a single system image. We're one Power BI system, global, around the world, but it's made out of multiple data centers, and you can actually, as IT, configure which data center the data will reside in. Collaboration still works; there are no silos. Everybody from any region can access the data if they have permission, but you can meet the compliance requirements of the regulators. Lineage and impact analysis, a lot of auditing work
that we're doing. A lot of work that we're doing around certified datasets and around Application Lifecycle Management. So, let's just focus on the last two for a moment. Certified datasets: everybody can create data, but some data is better than other data; we know that. There's data that people have worked to clean, to enrich, to verify, to validate. What we're going to provide is that for the data that you guys have created, the IT-blessed data, you can go and certify it. When you certify that data, you put a certified tag on it. Whenever your business users go to create a new report and say, "I want to connect to data," the first datasets that show up, floating to the top, will be your certified datasets. They'll know which data is best. It will have special tagging, a special way for them to know that this is better than the other data. So, we encourage them to find it, the system will help them find it and help make sure that they use it, and you'll know what they're using, and you can see that they're using the right set of data. Certified data: you can see the slider there, very visible. Just create a dataset, certify it, and for all your users that have permission, it will float to the top of their list. ALM: We build those big
systems we have today. So, it's kind of stunning how in the last year, as enterprises started to adopt it more and more, we moved to a state where, if a report is down or a BI application is down for an hour, we get a call from the CIO saying, "How dare you? I have thousands of people who are relying on those reports to run their business." BI becomes mission-critical, which is something I had never seen before in the decades I've been working in the BI industry. BI is now becoming mission-critical: when the system is not up, when the system is showing wrong data, when the system is slow, the CIO knows about it. The organization is disrupted. So, we need to start managing those applications the same way that we manage any other mission-critical application. It also means a proper Application Lifecycle
Management system. It means that there is going to be a development environment, there's going to be a test environment, there's going to be a deployment environment. You'll be able to move the artifacts that you're developing from one environment to the other in a controlled manner. You can do this to see the differences. You can decide which measures you want to move and which measures you haven't finished developing and keep on the other side. All of that is coming. This capability is going to show up in the next couple of weeks, when we have another thing coming, which is, of course, the APIs. The XML for Analysis API, the API of Analysis Services, is what allows us to connect to all the tooling: SQL Server Management Studio, SQL Server Profiler. All this stuff is going to work with Power BI as well. The whole wealth of the enterprise stack is going to work with Power BI. To demonstrate some of that, I'm going to invite Christian Wade to the stage. I want to see both Enterprise Reporting and enterprise modeling.>>All right. Hello, everyone. Very happy to be here.
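As a concrete illustration of what "XML for Analysis" means on the wire, here is a minimal sketch of an XMLA Discover request, the SOAP call that tools like SQL Server Management Studio send under the hood. The endpoint URL and catalog name are hypothetical placeholders, and real clients build and send these envelopes for you; this only shows the shape of the protocol.

```python
from xml.sax.saxutils import escape

# Hypothetical Power BI XMLA endpoint URL (the workspace name is invented).
ENDPOINT = "powerbi://api.powerbi.com/v1.0/myorg/SalesWorkspace"

def build_discover(request_type: str, catalog: str) -> str:
    """Build a minimal XMLA Discover envelope, e.g. to list a dataset's tables."""
    return f"""<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
      <RequestType>{escape(request_type)}</RequestType>
      <Restrictions/>
      <Properties>
        <PropertyList>
          <Catalog>{escape(catalog)}</Catalog>
        </PropertyList>
      </Properties>
    </Discover>
  </soap:Body>
</soap:Envelope>"""

# DBSCHEMA_TABLES is a standard XMLA schema rowset for listing tables;
# "Sales Dataset" is an invented catalog name.
envelope = build_discover("DBSCHEMA_TABLES", "Sales Dataset")
print(envelope.splitlines()[0])
```

Opening the endpoint means any client that can speak this protocol, from SSMS to third-party BI tools, can treat a Power BI dataset as an Analysis Services database.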
Let me just get set up. Okay. So, I'm going to start with complex models. Analysis Services has been around for 20 years, as Amir said. It's really the enterprise business intelligence workhorse of the industry. With Analysis Services, it's not uncommon to have models with over 100 tables in a single model. So, we are introducing a new diagram view. Sorry, just one second. All right. We're introducing a new diagram view that will scale to hundreds of tables and allow users to break out these models by subject area, and it just allows the management of these much larger models. You'll just have to excuse me here, just one second. Sorry, just getting this loaded here, wasn't expecting this.
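The subject-area idea just described, breaking one large model into several smaller diagram layouts, can be sketched in a few lines. All table and subject-area names below are invented examples, and this is only a toy illustration of the grouping, not how Power BI Desktop actually stores layouts.

```python
from collections import defaultdict

# Invented example: each table in the model is assigned a subject area.
MODEL_TABLES = {
    "FactSales": "Sales",
    "DimCustomer": "Sales",
    "DimProduct": "Sales",
    "FactInventory": "Inventory",
    "DimWarehouse": "Inventory",
    "DimDate": "Shared",
}

def layouts_by_subject_area(tables):
    """Group a large model's tables into one diagram layout per subject area."""
    layouts = defaultdict(list)
    for table, area in sorted(tables.items()):
        layouts[area].append(table)
    return dict(layouts)

print(layouts_by_subject_area(MODEL_TABLES))
```

The point of the feature is exactly this partitioning: a modeler working on a 100-table model only ever looks at one subject area's worth of tables at a time.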
Okay, here we go. This model, as you can see, just loaded; in a few seconds it's got almost 100 tables. We can break it out by subject area, and there you can see. So, here we have a separate subject area, and another one here. We can create new layouts, and I'm going to find a table. We can add related tables. So, as you can see, we've got a nice new diagram view for Power BI Desktop. This will be shipping very soon. It makes the life of the business intelligence professional much easier. We can also multi-select objects and set common properties in one go. So, in this case, you can see I have three columns selected, and I'm going to set the display folder. Display folders are a feature from Analysis Services; the zooming isn't working very well, excuse me. Display folders have been around for over a decade, precisely to allow consumers of these complex models to find what they're looking for much more easily. So, we now have these large models, there's a lot of complexity, there's a lot of interdependency, and we have these techniques to make them easily consumed. So, that's the new diagram view. The next thing I'd like to talk about is a feature we recently introduced called composite models. This is really a game changer in terms of what you can do with DirectQuery, in many different ways. So, for this particular table, even though the zooming isn't working very well, let me try.
but let me try. So, you can see here, this table is a DirectQuery table and it’s getting
data from HDI Spark. Traditionally,
DirectQuery which will federate the queries to the data source on the fly as the users
interact with the visuals. Traditionally,
DirectQuery was only for a single data source. Whereas now, we can have multiple DirectQuery data
sources in the same data set, which opens up lots and lots of new scenarios that were
not possible before. Additionally, we
also can now have imports and DirectQuery combined in the same datasets. All right. This is really a game changer. Whereas before, the whole
dataset had to be imports, you can now pick and choose
the individual tables to cache to get that
blazing fast performance for your users where
it really counts. You get a much bigger bang for your buck in terms of
the resource usage and memory consumption that is used by caching
the data into memory. So, this is composite models multiple DirectQuery sources and combining imports and
DirectQuery in the same dataset. This is huge. We’ve just
delivered this in preview. Now, what I'm going to show is a feature that builds on top of composite models. I don't know if you've seen this, because the demo's gone viral, but this feature is truly a game changer in terms of scalability. This is probably the biggest scalability feature, you could argue, that we've ever delivered. I mean, this one is absolutely huge. So, I'm going to switch over to here. This dataset is data for a crowdsourced courier service, where a smartphone app emits the drivers' locations. As you can imagine, it generates a ton of data. So, this measure, as you can see here, I'm not sure whether you can see it, this measure is simply giving us the count of all the rows in the table. So, let's find out how many rows we have in this table. I'm going to drag this onto the canvas. Just to put this into context, I did a keynote at a conference last year
and I had one of the biggest Azure Analysis Services SKUs, with 400 gigabytes of memory and over 25 cores. Just a huge dataset: that thing was 10 billion rows, and that was really pushing the boundaries in terms of what you could scale to on an enterprise business intelligence platform. Well, this dataset is actually a trillion rows. This defies the laws of physics as we know them in terms of what is enabled for interactive analysis. You notice I just got instant response times on this data. This is a trillion rows of data. This is actually a quarter of a petabyte of data from HDI Spark. If you've ever watched these big data demos, this is the first time that I'm aware of, at least, that anyone can have this kind of interactive analysis over such a large dataset. So, this is a trillion rows. I'm going to go with
this measure, travelled distance. I'm going to break it out by date. I'm going to make it nice and big, and I'm going to make it a bar chart. I'm going to break it out by the miles per job. I'm just getting instant response times, just clicky-clicky, drag-and-drop data analysis over this massive dataset. It's just instant response times, instant gratification. I'm now going to filter this data just by the drivers who left the organization. I'm now going to generate another visual here and make it a table. I'm going to filter this just by December 23rd. I select it here, and now I'm going to bring over the driver name to this visual. So, just like that, I've created a list of drivers who worked on December 23rd, subsequently left the company, and performed jobs of over 50 miles, over a quarter of a petabyte of data. How is this physically possible? With Power BI, we achieve this blazing fast performance by caching data into memory. That's why these high-memory SKUs have so much memory. But even with good compression ratios, a quarter of a petabyte is quite a lot of data. So, what we're doing
here is we're caching the data at the aggregated level, which unlocks these massive datasets in a way that was physically impossible before. You'll find that you get very high cache hit rates, because the percentage of business intelligence queries that are at the aggregated level is very, very high, probably 90-95 percent. What if the user happens to drill down to the detail level, where there is no cache? So, in this case, I'm going to drill through to Abigail Johnson. We're now going to switch over to this other report, and this is going to take a little bit longer, because it's going to plot the individual locations for Abigail Johnson on December 23rd. So, it's now going to select a few hundred rows from the trillion-row table. There is no aggregation happening here, so it's not hitting the aggregated cache. So, what's it going to do? Because we now have this hybrid of DirectQuery and import combined in the same dataset, it's just submitting a DirectQuery to, in this case, HDI Spark. This will work on HDI Spark, on Databricks, on Azure SQL Data Warehouse, and many, many other DirectQuery sources that we support. So, we can see
this query is running here for Abigail Johnson. Obviously, this is a trillion rows of data, and this is a 23-node HDI Spark cluster. It's a quarter of a petabyte of data, so it still takes a little while. Generally speaking, the data warehouses and these big data systems tend to handle these non-aggregated, filtered queries quite well, right? You're using Power BI for what it's really good at, which is the aggregated queries. You're using the data warehouse or the big data system for what they're better at, which is the non-aggregated, filtered queries. So, here we can see Abigail Johnson drove all the way across Memphis just before the holidays. She had multiple jobs. She got frustrated, she left the company; maybe she started her own company, and she's doing very well now. The point being, we unlocked a quarter of a petabyte of data. We enabled interactive, clicky-clicky-draggy-droppy style analysis over a quarter of a petabyte of data in a way that was physically impossible before. This is a real game changer. This truly will transform interactive analysis of big data. Thank you very much. Thank you so much. All right. Now, there's more, there's more, there's more. So, here's another thing that we're going to do. This is a brand new demo. No one has seen this demo. This demo didn't exist until very, very recently; I'm not going to say 10 minutes ago. What I'm about to show is brand new, hot off the press. So, this year, we'll
be opening up, as Amir described, the XMLA endpoint for Power BI. What XMLA endpoints will allow is the open platform connectivity that we have with Analysis Services, right? As I may have said, Analysis Services has been a market dominator for 20 years. Basically every other major BI vendor on the planet supports connectivity to Analysis Services. Once we open up the XMLA endpoint on Power BI, you will be able to connect to a Power BI dataset as though it were an Analysis Services database. All right. So, our competitors' BI products will be able to connect to it, to have that reusable semantic model in an enterprise organization. Obviously, we believe that Power BI should be used for the visualizations, but it is open platform connectivity.>>It's actually very important. [inaudible].>>Yeah.>>[inaudible]. You'll
find that you have users who love the tool that they are using right now. Maybe they are using Tableau, maybe they are using another tool, and you still want to get Power BI adopted. You can have the data in Power BI. You can have the users connect from Tableau to Power BI. Tableau really supports Analysis Services, so Power BI is going to look [inaudible]. You can have all the data. I don't know how to make my mic work.>>We have, maybe this mic.>>I'll use this one.>>All right. I'm not going to take my shirt off. Can you hear me now? Okay.>>I don't know what happened. Okay. So, you can have all the data be in Power BI and have your Tableau users connect to the dataset in Power BI. Tableau readily supports it, because it supports Analysis Services: Power BI looks like Analysis Services to Tableau. So, you could have the enterprise model stored there. Most of the users use their dashboards and reporting in Power BI, but those die-hard Tableau users, you can serve them too. You can still support them, because it will work with Tableau, and it will work with any other BI tool that supports XMLA, which is the vast majority of the tools in the industry.>>Absolutely.>>That really allows you to create that center of gravity, that data well, where everything is in one place, managed in one place, governed in one place, collaborated on in one place, and you don't leave those Tableau users out of the loop.>>Absolutely. Another example of this is for the
management tools, right? So, we have lots of intellectual property for [inaudible] in terms of management, like SQL Server Management Studio, SQL Server Profiler, and SQL Server Data Tools for authoring these models. Here, I have a SQL Server Management Studio that is connected to, and let me just open this up here, actually you can see it better over here. If I try and zoom in, this is actually connected to a Power BI workspace. All right. This is not connected to an Analysis Services server. This is connected to a Power BI workspace. It is listing all of the datasets in the Power BI workspace. In fact, I have a copy of it here. So, this is the same workspace with the same datasets. All right. So, I can now connect from SQL Server Management Studio, I can manage these datasets, I can set security, I can perform scripting operations, I can perform administrative operations, and I can perform fine-grained data refresh operations. All right. So, here, I have this dataset, and I'm going to go to this table and look at, in fact, I think I already
have this window open. These are the partitions for this table. All right. So, we recently introduced a feature called incremental refresh, which is also a significant scalability feature: it means you don't have to refresh all 10 years and 10 billion rows of data every time you do a refresh. You can refresh only the data that has changed, which is much more efficient, much quicker, and much more reliable, and doesn't depend on volatile connections to data sources, etc. Now, the Analysis Services practitioners among you will probably be fully aware of how incremental refresh works. Incremental refresh for Power BI, by the way, is very simple to set up in Power BI Desktop, right? But when you define that incremental refresh policy in Power BI Desktop, and you perform a refresh in the Power BI service, the way incremental refresh works is it generates good old traditional Analysis Services partitions, and it uses those partitions to refresh only the data that has changed, based off the incremental refresh policy. So, I can now, for the first time, connect from SQL Server Management Studio to a Power BI workspace. I can see the datasets. I can break out the tables, and I can even see the partitions generated by the Power BI incremental refresh feature.>>How about we move on to reporting, because we are
running out of time?>>Yeah. Absolutely. So, I'll just quickly say that you can script this out, and you can refresh historical partitions. All right. Okay. So, that's XMLA endpoints. A bunch of community tools will also be able to connect to the Power BI workspace. Now, the last thing I'm going to show you is, well, a couple of things. First off, this is the new homepage that we just launched, and this allows business users to find what they're looking for very easily. All right. It lists the dashboards, and reports, and workspaces that they are most interested in, based on their usage and what they have flagged as favorites. We have a nice search capability here. So, again, I can find apps and workspaces, etc. In this case, I'm going to go to this report. As you can see, this is just a report, but it's a special type of report: it's actually a Reporting Services report. This is still not yet released; this is still in a private workspace. So, this is a Reporting Services report, and as you well know, there are countless masses of Reporting Services artifacts on-premises today. They will all be able to mass-migrate to Power BI, so that organizations can co-locate their business intelligence artifacts in a single, easily accessible location: the Power BI workspace. This is, excuse me, a pixel-perfect paginated report. We can export to a variety of formats, and as I may have said, we'll be able to have scheduled delivery, etc. So, Reporting Services and Analysis Services are coming to Power BI. We're bringing the full enterprise capability to Power BI as a single, all-inclusive platform for enterprise and self-service. Thank you very much.>>Thank you, Christian.>>Thank you.
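The aggregations feature demonstrated on the trillion-row dataset can be summed up in a small sketch: cache the data once at the aggregated level, answer aggregate queries from that small cache, and fall back to a DirectQuery against the source only for detail-level queries. The rows, names, and "source" below are invented stand-ins, not the actual engine.

```python
from collections import defaultdict

# Invented sample rows; imagine this living in a huge source system
# (the "trillion rows" in the demo).
SOURCE_ROWS = [
    {"date": "12-23", "driver": "Abigail", "miles": 30.0},
    {"date": "12-23", "driver": "Ben", "miles": 12.5},
    {"date": "12-24", "driver": "Abigail", "miles": 7.5},
]

# Build the in-memory aggregation cache once: total miles per date.
AGG_CACHE = defaultdict(float)
for row in SOURCE_ROWS:
    AGG_CACHE[row["date"]] += row["miles"]

def total_miles(date):
    """Aggregate-level query: served instantly from the in-memory cache."""
    return AGG_CACHE[date]

def detail_rows(driver, date):
    """Detail-level query: no cache hit, so 'DirectQuery' the source."""
    return [r for r in SOURCE_ROWS if r["driver"] == driver and r["date"] == date]

print(total_miles("12-23"))             # fast path: cache hit at the aggregated level
print(detail_rows("Abigail", "12-23"))  # slow path: scans the source
```

Because most BI queries are aggregate-level, the small cache answers the vast majority of them, and only drill-throughs like the Abigail Johnson detail view pay the cost of querying the source.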
