On my way HOME…

I am at the Birmingham airport on my way home from UKOUG Tech 2017. This was my last event for this year. Cannot wait to be home to celebrate the independence of Finland tomorrow. 100 years independent!

My year was busier than ever. I kind of hope next year will be easier, but that is not an easy wish because I really enjoyed this year a lot. In addition to my everyday work I have been teaching at the university, studying, researching and speaking at many different kinds of events about many different topics. In 2017 I spoke at:

UKOUG, Birmingham, UK

DOAG, Nuremberg, Germany

Devoxx Morocco, Casablanca, Morocco

Nordic ACE Tour
Denmark
Sweden
Norway
Finland

HrOUG, Rovinj, Croatia

SIOUG, Portoroz, Slovenia

Oracle Open World, San Francisco, USA

Tajikistan TechDay, Dushanbe, Tajikistan

POUG, Krakow, Poland

Oracle Code Seoul, South Korea

APEX Day, Oracle House, Finland

Oracle Code Bengaluru, India

KScope17, San Antonio, USA

E4 2017, Dallas, USA

BGOUG, Bulgaria

OTN EMEA Tour
Baku, Azerbaijan
Madrid, Spain

Riga DevDays 2017, Riga, Latvia

APEX Connect, Berlin, Germany

BIWA Summit, Redwood City, Oracle HQ, USA

 

If I counted correctly, that makes 23 events in a year. Now it is time to take a break for the holidays. Next year does not start very slowly either, because I have already agreed to go to Israel in January and Ireland in March. But until then: enjoy December. Let’s hope it will be a white Christmas and that Santa (from Finland!) pays a visit 🙂

 

Cheers,

Heli

Reporting with Data Modeler 17.3

In version 17.3 there are some changes in the reporting user interface. The reporting itself works as before.

The reporting form has been divided into two tabs: General and Objects.

General includes the settings that define what the report looks like, including templates and output formats:

reporting1

You can also define the Company name, some reporting preferences, the Reports Directory and the Saxon XSLT 2.0 jar file path on the General tab. The defaults can be set in Preferences as before:

reporting3

And the Objects tab includes selecting the source (open design/reporting repository) and configurations (which objects will be included in the report):

reporting2

 

Default System Types Directory

Why would I define the Default System Types Directory in Preferences?

SystemTYpes

If that directory has been defined, you do not need to worry about domains etc. when upgrading Data Modeler. All the system-type content is automatically available even after an upgrade. Things like:

Report templates
Report configurations
Saved searches
Custom report templates
Domains
RDBMS Sites
Custom Scripts
Types
Global Design Level Settings

And you can also add this directory to version control to keep track of the changes…
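For example, putting the directory under git could look like this sketch (the directory path here is just an assumption; use whatever you configured as the Default System Types Directory in Preferences):

```shell
# Sketch: version-controlling the Default System Types Directory with git.
# The path is an assumption - use the directory you set in Preferences.
TYPES_DIR="$HOME/datamodeler-systemtypes"
mkdir -p "$TYPES_DIR"
cd "$TYPES_DIR"
git init -q
git add -A
git -c user.name="heli" -c user.email="heli@example.com" \
    commit -q --allow-empty -m "Baseline system types before upgrading Data Modeler"
```

After each change to domains, report templates or custom scripts, a new commit records exactly what changed between Data Modeler versions.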

 

Sharing Design Properties/Design Level Settings

I have defined the Naming Standards, Templates etc. for one design and I would like to have those same settings for another design I am working on. How can I do that?

You can do that using the Export/Import functionality. Open the design that has the right settings. Go to either Design Properties or Design Level Settings and select Export.

Export.jpg

Save the file. Remember to add the extension .xml.

Open the design in which you want to implement those settings. Select Import and find the file with the exported settings. Press Open. Now the Design Properties/Design Level Settings are the same as in the original design.

 

P.S. If you want ALL your designs to have the same design level settings, simply check the “Use global design level settings” box and there is no need to do any export/import.

P.S.2. You might be interested in this too: helifromfinland.blog/2017/10/13/default-system-types-directory/

 

Design Level Settings

I am sure you remember there are Preferences and Design Properties in Data Modeler. Preferences affect all the designs in a Data Modeler installation (on that computer), whereas Design Properties affect only the design they are defined for.

 

Now in Data Modeler 17.3 you can also find those Design Properties under the Tools menu, under the name Design Level Settings:

DesignL0

 

These are the same settings you can still find under Design Properties too, and you should definitely check them (either under Design Properties or Design Level Settings), because they are important.

DesignL1

P.S. You might be interested in this too: helifromfinland.blog/2017/10/13/default-system-types-directory/

Where are my domains?

I just upgraded Data Modeler to the latest release/installed Data Modeler on a new computer, but all my domains are gone. Is there any way to get them back, or do I need to define them manually again? Of course there is a way!

Select File, Import, Domains.

Domains1

Then find the file your domains are saved in using Open Domain File.

Domains2

Import.

Domains3

Go and check the domains.

Domains4

And there they are.

P.S. You might be interested in this too: helifromfinland.blog/2017/10/13/default-system-types-directory/

 

How can I change the table/column names to uppercase?

When I design the ER model I want to use mixed-case naming, but in my tables I want to have all the names in uppercase. Is there a way to automatically change the names to uppercase? YES!

Upper1

These are my tables, and as you can see the names of the tables and some of the columns are in very mixed-case notation. What I need now is a Transformation Script:

Upper11

In the Custom Transformation Scripts I can find a script called Tables to upper case – Rhino.

Upper15

I press Apply. And this is what I get:

Upper2
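The idea behind the built-in script is simply to loop over the tables in the model and uppercase the table and column names. As a standalone sketch of that idea (plain JavaScript on a plain-object stand-in for the model, not the actual Data Modeler scripting API):

```javascript
// Standalone sketch of what a "to upper case" transformation does, using a
// plain object as a stand-in for the Data Modeler relational model
// (this is NOT the real Data Modeler API).
const model = {
  tables: [
    { name: "Isa", columns: [{ name: "Avain_Uusi" }, { name: "JokuMuu" }] },
    { name: "lapsi", columns: [{ name: "AvainLapsi" }] },
  ],
};

for (const table of model.tables) {
  table.name = table.name.toUpperCase(); // uppercase the table name
  for (const column of table.columns) {
    column.name = column.name.toUpperCase(); // ...and every column name
  }
}

console.log(model.tables.map((t) => t.name)); // [ 'ISA', 'LAPSI' ]
```

In the real script the loop runs over `model.getTableSet()` inside Data Modeler's Rhino engine, but the transformation logic is this simple.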

 

How can I create triggers automatically for the journaling columns?

Every single table in our database has the columns USER_CREATED, USER_MODIFIED, DATE_CREATED, DATE_MODIFIED. Is there a way to get Data Modeler to create the triggers for inserting data into those columns? YES!

The thing you need is Table DDL Transformation:

TableDDL1

 

Define a new Table DDL Transformation Script set (in my example Trigger_for_Technical_Column):

TableDDL2

Define an After Create script and write something like this in it (thank you Marko Helskyaho 🙂 ):

/*
 variable ddlStatementsList should be used to return the list with DDL statements
 that are created by the script - as shown below:
   ddlStatementsList.add(new java.lang.String(ddl));
 other available variables:
 - model - relational model instance
 - pModel - physical model instance
 - table - the table in the relational model
 - tableProxy - table definition in the physical model
*/
var ddl;
var lname;
// trigger name prefix
var trgPre = "BIU_";
// trigger name suffix
var trgSuf = "_TRG";
var prompt = model.getAppView().getSettings().isIncludePromptInDDL();
var useSchema = model.getAppView().getSettings().isIncludeSchemaInDDL();
if (model.getStorageDesign().isOpen()) {
  if (useSchema) {
    lname = tableProxy.getLongName();
  } else {
    lname = tableProxy.getName();
  }
} else {
  if (useSchema) {
    lname = table.getLongName();
  } else {
    lname = table.getName();
  }
}

if (prompt) {
  ddl = "\nPROMPT Creating Trigger for '" + lname + "';\n";
} else {
  ddl = "\n";
}
ddl = ddl +
  "CREATE OR REPLACE TRIGGER " + trgPre + lname + trgSuf + "\n" +
  " BEFORE \n" +
  " INSERT OR \n" +
  " UPDATE ON " + lname + " FOR EACH ROW \n" +
  " BEGIN \n" +
  " IF INSERTING THEN \n" +
  " :NEW.DATE_CREATED := SYSDATE; \n" +
  " :NEW.USER_CREATED := NVL(WWV_FLOW.G_USER,USER); \n" +
  " END IF; \n" +
  " :NEW.DATE_MODIFIED := SYSDATE; \n" +
  " :NEW.USER_MODIFIED := NVL(WWV_FLOW.G_USER,USER); \n" +
  " END; \n" +
  " /";
ddlStatementsList.add(new java.lang.String(ddl));

You can test it immediately. Select one of your tables as Table To Test (in my example Isa) and press Test. This is what I got:

TableDDL3

Looks like what I wanted, but how can I use this script?

When generating DDL, select the Include Table DDL Scripts tab. Then select the script from the Active Script Set list and select the tables for which you want to execute it:

TableDDL4

OK. And this is what I got:

CREATE TABLE isa (
avain_uusi NUMBER(16) NOT NULL,
jokumuu VARCHAR2(50 CHAR) NOT NULL,
user_created VARCHAR2(50 CHAR) NOT NULL,
user_modified VARCHAR2(50 CHAR),
date_created DATE NOT NULL,
date_modified DATE
);

ALTER TABLE isa ADD CONSTRAINT isa_pk PRIMARY KEY ( avain_uusi );

CREATE OR REPLACE TRIGGER BIU_Isa_TRG
BEFORE
INSERT OR
UPDATE ON Isa FOR EACH ROW
BEGIN
IF INSERTING THEN
:NEW.DATE_CREATED := SYSDATE;
:NEW.USER_CREATED := NVL(WWV_FLOW.G_USER,USER);
END IF;
:NEW.DATE_MODIFIED := SYSDATE;
:NEW.USER_MODIFIED := NVL(WWV_FLOW.G_USER,USER);
END;
/

CREATE TABLE lapsi (
avainlapsi NUMBER(16) NOT NULL,
lisatieto VARCHAR2(50 CHAR) NOT NULL,
avain_uusi NUMBER(16) NOT NULL,
user_created VARCHAR2(50 CHAR) NOT NULL,
user_modified VARCHAR2(50 CHAR),
date_created DATE NOT NULL,
date_modified DATE
);

ALTER TABLE lapsi ADD CONSTRAINT lapsi_pk PRIMARY KEY ( avainlapsi,avain_uusi );

CREATE OR REPLACE TRIGGER BIU_lapsi_TRG
BEFORE
INSERT OR
UPDATE ON lapsi FOR EACH ROW
BEGIN
IF INSERTING THEN
:NEW.DATE_CREATED := SYSDATE;
:NEW.USER_CREATED := NVL(WWV_FLOW.G_USER,USER);
END IF;
:NEW.DATE_MODIFIED := SYSDATE;
:NEW.USER_MODIFIED := NVL(WWV_FLOW.G_USER,USER);
END;
/

ALTER TABLE lapsi
ADD CONSTRAINT lapsi_isa_fk FOREIGN KEY ( avain_uusi )
REFERENCES isa ( avain_uusi );
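If you run the generated DDL against a database, a quick sanity check could look like this sketch (note that the trigger references WWV_FLOW.G_USER, so it assumes APEX is installed; the NVL falls back to USER outside an APEX session):

```sql
-- Sketch: the trigger fills the journaling columns even though the
-- INSERT does not supply them, so the NOT NULL constraints are satisfied.
INSERT INTO isa (avain_uusi, jokumuu) VALUES (1, 'test');

SELECT user_created, date_created, user_modified, date_modified
FROM   isa
WHERE  avain_uusi = 1;
```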

 

 

How can I get technical columns to the relational model?

I always define columns like USER_CREATED, USER_MODIFIED, DATE_CREATED, DATE_MODIFIED for my tables, but I would not like to design them in the logical model. Is there a way to get them automatically into the relational model? YES!

Here’s my logical model. None of my entities has those attributes defined.

TempTable0

 

First design a table that has only those technical columns, nothing else. In my example the table is called TempTable:

TempTable1

 

Then, when Engineering to Relational Model, select General Options, check Use Template Table and select this table from the list:

TempTable2

Engineer.

Now all the tables you just generated to the relational model have these columns:

TempTable3

ODC Appreciation Day: JSON Data Guide

There are so many features I like, but this feature is something I think people do not know about, and it will be very valuable for a multi-model database, which I believe is the future.

JSON Data Guide allows JSON data to be saved as it is and still be used with SQL. In other words, it gives the ability to write without a schema but read with a schema, which is functionality we need in the world of Big Data.

JSON Data Guide can be either persistent or transient. There are two formats of JSON Data Guide: flat and hierarchical. The hierarchical format enables features like views on top of a set of JSON documents and virtual columns.
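As a sketch of what this looks like in SQL (the orders table here is hypothetical; JSON_DATAGUIDE is available from Oracle Database 12.2 onwards):

```sql
-- Hypothetical table with a column constrained to contain JSON
CREATE TABLE orders (
    id  NUMBER PRIMARY KEY,
    doc VARCHAR2(4000) CHECK ( doc IS JSON )
);

-- Flat data guide: summarizes every JSON path with its type and length
SELECT json_dataguide(doc) FROM orders;

-- Hierarchical (JSON Schema like) data guide
SELECT json_dataguide(doc, dbms_json.format_hierarchical) FROM orders;
```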

JSON Data Guide enables FSDM (Flexible Schema Data Management) in an RDBMS. It allows saving data in its original format without missing the opportunity to use that data efficiently, and it is therefore one step closer to a multi-model database.