I had the pleasure of attending the first Swiss Oracle User Community conference in Zurich over the last two days. Great event at the Oracle offices in the “Circle” at Zurich Airport. It was an opportunity to see some ex-Trivadis colleagues – Ludo, Frank, Dani – and to meet all the Oracle conference addicts.
The two parallel sessions created a dilemma for me every hour: which presentation would I miss?
Here are some of the highlights I noticed – the things that were important or new to me; I certainly won't cover all the knowledge that was in the air.
Swiss OUC day 1
Dominic Giles (Oracle) gave an overview of the new 23c Generative AI features and more:
- The new VECTOR datatype for AI data models was introduced (and delayed the 23c release). It allows using ML models and calculating the distance between vectors with different methods.
- AI Assist will soon allow using natural language to create SQL specific to the data model of “our” database. It will also be available on premises – similar to, but easier than, “Select AI” from Autonomous DB
- The SQL Developer extension for VS Code allows using DB connections directly in VS Code – already available
- Create and test an Oracle Autonomous Database on your laptop as a Podman container
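As a hedged sketch of what the new datatype enables (table and column names are made up; the VECTOR syntax and VECTOR_DISTANCE function are as documented for 23ai):

```sql
-- Illustrative sketch of the new VECTOR datatype (names are invented)
CREATE TABLE docs (
  id        NUMBER PRIMARY KEY,
  title     VARCHAR2(200),
  embedding VECTOR(3, FLOAT32)   -- only 3 dimensions, just for the demo
);

INSERT INTO docs VALUES (1, 'networking',  '[0.9, 0.1, 0.0]');
INSERT INTO docs VALUES (2, 'performance', '[0.1, 0.8, 0.2]');

-- Find the row closest to a query vector, here using cosine distance
SELECT title
FROM   docs
ORDER  BY VECTOR_DISTANCE(embedding, '[1.0, 0.0, 0.0]', COSINE)
FETCH  FIRST 1 ROW ONLY;
```

In a real AI use case the embeddings would come from an ML model rather than being typed in by hand.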
Martin Klier explained, well and visually, how networks, subnets and routers work, and what happens when the path to the destination is cut. A really good reminder.
Øyvind Isene shared his story of being the sole developer of a DB automation API, leaving the following messages:
- “You don’t have the time to make all the errors yourself. Learn from others’ errors.” –> share your errors with others
- Use Logger when scripting; Use ORDS for DB automation
Martin Klier again: performance is rarely the result of an accident. So, when doing performance analysis, document every action you take, even if it will only show that the problem is/was not there; do not use averages, and pay attention to skewness.
Swiss OUC day 2
Chris Saxon, in a plenary session, gave an example-based overview of the new 23c features for developers:
- new DB_DEVELOPER_ROLE – a combination of CONNECT + RESOURCE + some other privileges
- CREATE … IF NOT EXISTS and DROP … IF EXISTS DDLs to avoid errors on repeated execution (beware: CREATE … IF NOT EXISTS silently succeeds even when the existing object has a different definition)
- No more need for “from dual”
- Domain datatypes – named, schema-wide column types with integrated constraints and defaults, with the option to add display and order functions (for instance an INTERVAL DAY domain that displays in minutes)
- Annotations for columns/tables/domains – a kind of extension of comments, in name-value pair format
- Multiple rows of values in a single INSERT command – also helpful for generating data
- GROUP BY using a column alias
- sum/floor/ceil/avg on INTERVAL and other datatypes
- fuzzy_match function to replace LIKE when looking for similar results (without needing Oracle Text)
- JSON converted to INSERT statements
- Boolean usable everywhere (e.g. if a function returns boolean, it is possible to write “select … from … where func_name(input)”)
- The SQL_TRANSPILER parameter injects the content of a PL/SQL function directly into the SQL statement, allowing the optimizer to better improve the query (default false)
- GRANT … ANY … on a schema
- Reminder to use NOAUTHENTICATION for owner schemas and proxy users instead of sharing passwords
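A few of the features above combined into one hedged sketch (table and column names are invented; syntax per the 23c documentation):

```sql
-- DDL that tolerates repeated execution
CREATE TABLE IF NOT EXISTS t (id NUMBER, status BOOLEAN);

-- Multiple rows in a single INSERT ... VALUES
INSERT INTO t VALUES (1, TRUE), (2, FALSE), (3, TRUE);

-- No more FROM dual
SELECT sysdate;

-- GROUP BY on a column alias
SELECT CASE WHEN status THEN 'on' ELSE 'off' END AS state,
       COUNT(*)
FROM   t
GROUP  BY state;

-- A boolean-returning function could be used directly in WHERE,
-- e.g. SELECT * FROM t WHERE is_active(id);  (is_active is hypothetical)

DROP TABLE IF EXISTS t;
```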
Mike Dietrich continued with the new 23c features in Data Pump:
- New dumpfile format with special functionalities for OCI Object Storage
- INCLUDE and EXCLUDE can be combined in the same impdp/expdp
- All transportable export/import jobs can now run in parallel
- Checksum for dumpfiles
- REMAP_TABLESPACE=%:USERS (wildcard usage)
- Recommendation to gather SYS+SYSTEM schema stats instead of using “gather_dictionary_stats”
- Always install the DATAPUMP merge patch. It is not part of an RU because it is not RAC Rolling installable
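A hedged sketch of how some of these look in practice, as expdp/impdp parameter files (all object and file names are invented; CHECKSUM is an export-side parameter, VERIFY_CHECKSUM its import-side counterpart):

```
# expdp parfile sketch
DUMPFILE=exp%L.dmp
CHECKSUM=YES                    # store a checksum in the dumpfile
INCLUDE=TABLE:"LIKE 'SALES%'"
EXCLUDE=STATISTICS              # INCLUDE and EXCLUDE combined is now allowed

# impdp parfile sketch
DUMPFILE=exp%L.dmp
VERIFY_CHECKSUM=YES             # validate the checksum on import
REMAP_TABLESPACE=%:USERS        # wildcard: remap every tablespace to USERS
```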
Neil Chandler shared the story of a bank that wanted to replace its end-of-life Exadata with something cheaper. Neil spent some time investigating all the resource usage – mainly the CPU from the storage servers plus the IO saved by cell offloading – and then calculated the costs. Conclusion: if they want to keep the performance, there is no cheaper option using commodity hardware or an open source DB.
Frank Pachot, in a tmux terminal session, explained that high ‘load’ values in the Linux ‘top’ command do not mean the machine is overloaded. My main takeaways were:
- Recent kernels include PSI (Pressure Stall Information), not always enabled by default, which can show better whether there is real pressure and where it is happening.
- Different types of IO count differently – in particular, a type of direct IO can count towards the load average (Linux load includes tasks in uninterruptible sleep, not only runnable ones).
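For reference, PSI exposes its data under /proc/pressure; a minimal check, assuming a reasonably recent kernel (PSI may require CONFIG_PSI and, on some distributions, the psi=1 boot parameter):

```shell
# Show CPU, memory and IO pressure if the kernel exposes PSI,
# otherwise report that it is not enabled on this system.
for res in cpu memory io; do
  if [ -r "/proc/pressure/$res" ]; then
    echo "== $res =="
    cat "/proc/pressure/$res"   # avg10/avg60/avg300 = % of time stalled
  else
    echo "PSI not available for $res"
  fi
done
```

Unlike the load average, the avg figures directly answer "are tasks actually waiting on this resource?".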
Fernando Simon moved from Exadata to ExaCC and shared his thoughts about the new experience:
- ExaCC offers good automation, click-type actions for inexperienced DBAs
- ExaCC lacks basic checks, such as whether there is enough space to back up the OH before a patch [why does it back up the OH? does it not patch out-of-place?]
- No integration of self-created OH images
- CPU scale-up works well and is immediate, but scale-down has a hard minimum wait time of 2h since the last scale-up
- There is no documented tool that allows creating a DB with all parameters (block size, charset, silent mode with password)
- Even the Zero Downtime Migration tool resorts to a workaround: create a dummy DB, drop it manually and create it again manually with DBCA. DBs created directly with DBCA are not seen in OCI; there is no ‘discovery’ command
- No possibility to use DBs with a different domain in their name on the same ExaCC without unofficial workarounds
- ExaCC runs all admin operations serially, which can be slow
Dani Schneider showed the new SQL Analysis Report and how it provides hints about bad SQL when running EXPLAIN PLAN.
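A hedged sketch of where this shows up (table and column names are invented): when the optimizer has something to flag, such as a missing join condition, the report appears as an extra section in the DBMS_XPLAN output.

```sql
-- A cartesian join by mistake: no join condition between the two tables
EXPLAIN PLAN FOR
  SELECT e.name, d.name
  FROM   employees e, departments d;

-- In 23c the plan output ends with a "SQL Analysis Report" section
-- pointing out the missing join condition
SELECT * FROM dbms_xplan.display();
```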
Kamil Stawiarski presented “Jas-min” – a word play on the name of one of the conference organizers – a JSON AWR/Statspack miner, and showed how this tool can help in cases where, for example, flashback queries cause “cursor: pin S wait on X” waits. Basically, do not use flashback queries in the application if the query will be used concurrently.
Thanks a lot to Gianni, Jasmin and Julian from the Swiss Oracle User Community for this great conference. Hope to see you again next year!