Tuesday, November 14, 2023
Don't miss the live webinar "Costly Mistakes in Determining Safety Stocks and a New Approach" taking place tomorrow at 9:00am Eastern. In this interactive webinar, we'll demonstrate why ignoring profit margin when setting service levels and safety stocks can be a costly mistake. We'll show you firsthand how a new method that accounts for gross margin can better optimize your inventory to maximize profits. Have your own data ready! All we need for a few SKUs is the average daily demand, standard deviation of demand, lead time, and gross margin. We'll apply the method to your data live to start increasing your bottom line. Feel free to reach out with any questions before the webinar. Register now to reserve your spot for this free webinar, tomorrow, November 15, at 9am Eastern: https://lnkd.in/e8-ZDR2J
Monday, November 06, 2023
Join our live, interactive session on Wednesday, November 8 at 9am Eastern where we'll optimize real inventory data right before your eyes. Bring your own data for products you want to optimize. We'll input the numbers into our system during the webinar, including:
- Daily demand (in units)
- Demand variability (i.e., standard deviation of daily demand)
- Lead time (in days)
- Margin, calculated as (sale price - purchase price)/purchase price
To give you an idea, the presenter, Dr. Jay Zhou, will demo the algorithm using data for one of his SKUs:
- Daily demand: 100 units
- Demand variability: 35
- Lead time: 21 days
- Margin: 45%
You'll see our optimization algorithm at work as we determine live on the webinar:
- Service level targets
- Safety stock requirements
- Ideal re-order quantities
This is a truly hands-on session where you'll get to optimize your own inventory with real recommendations you can implement immediately. Seats are limited - register now to reserve your spot and prepare your data! https://lnkd.in/ekfqzxXX
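As a point of reference, the textbook safety-stock calculation can be run on the demo SKU's numbers. This is only the standard normal-demand formula, not the webinar's proprietary margin-aware algorithm, and the 95% service level here is an illustrative assumption rather than a value from the post:

```python
from math import sqrt
from statistics import NormalDist

# Demo SKU from the post; the service level is an assumed placeholder,
# since the margin-based target is what the webinar's algorithm derives.
daily_demand = 100      # units per day
demand_sd = 35          # standard deviation of daily demand
lead_time = 21          # days
service_level = 0.95    # illustrative assumption

z = NormalDist().inv_cdf(service_level)          # z-score for the target service level
safety_stock = z * demand_sd * sqrt(lead_time)   # textbook safety stock formula
reorder_point = daily_demand * lead_time + safety_stock

print(round(safety_stock), round(reorder_point))
```

With these inputs, the standard formula gives roughly 264 units of safety stock and a reorder point around 2,364 units; the webinar's claim is that the right service level (and hence these numbers) should shift with gross margin.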
Tuesday, October 31, 2023
Upcoming Webinar: How to Boost Your Bottom Line: Achieving Greater Profits with Automated Inventory Optimization
Intuition tells us that an inventory item with a 65% gross margin, all else being equal, should have a higher service level and safety stock than one with a 10% margin, for the sake of maximizing profit. However, traditional methods fail to quantify exactly how much the service levels and safety stocks should differ between these items. This costs companies millions of dollars in lost profits every year. Our proprietary Safety Stock Price Model, developed by Dr. Jay Zhou, solves this issue. The model calculates the ideal service levels and safety stocks based on your products' gross margins. For the above example, it prescribes service levels of 99% vs 92% and safety stocks of 282 vs 170 units for the 65% vs 10% margin items (see the picture below). By optimizing service levels and safety stock in this way, your company's profit is maximized - exactly what you and your executives want! Register here for "How to Boost Your Bottom Line: Achieving Greater Profits with Automated Inventory Optimization" on Wednesday, November 1st at 9am ET.
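The quoted figures can be sanity-checked against standard normal z-scores. Assuming lead-time demand is normally distributed, backing out the implied lead-time demand standard deviation from the 92%/170-unit case reproduces the 282-unit figure for the 99% case (the margin-based choice of service level itself is the proprietary part):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf

# Back out the implied std dev of lead-time demand from the 10%-margin
# item (92% service level, 170 units of safety stock), then check that
# the same sigma reproduces the 282 units quoted at a 99% service level.
sigma_lt = 170 / z(0.92)    # implied lead-time demand std dev, ~121 units
ss_99 = z(0.99) * sigma_lt  # lands within a unit of the quoted 282

print(round(sigma_lt), round(ss_99))
```

In other words, the two safety-stock numbers are mutually consistent under the usual normal model; what the model adds is choosing 99% vs 92% from the margins.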
Friday, October 27, 2023
Upcoming Webinar on How to Boost Your Bottom Line: Achieving Greater Profits with Automated Inventory Optimization
"How to Boost Your Bottom Line: Achieving Greater Profits with Automated Inventory Optimization - Why Traditional Methods Fall Short and How Our Algorithm is the True Path to Financial Success" will be delivered by Dr. Jay Zhou on Wednesday, November 1st, 2023 at 9:00 AM (EDT). Please register here.
Friday, October 20, 2023
Bulk loading large volumes of data from staging tables into production tables is a common task, but a slow and tedious process using basic INSERT statements in Oracle. Using query hints can dramatically improve load times, as I recently discovered firsthand. In our ETL process, we needed to load order data from a staging table into the main orders table. Our initial INSERT statement looked like:
INSERT INTO Orders (order_id, customer_id, order_date, total_amount)
SELECT order_id, customer_id, order_date, total_amount
FROM Staging_Orders;
This took many hours for millions of rows. But then we came across hint syntax that lets you fine-tune SQL behavior. We added parallel and direct-path hints:
INSERT /*+ APPEND PARALLEL(4) */ INTO Orders (order_id, customer_id, order_date, total_amount)
SELECT order_id, customer_id, order_date, total_amount
FROM Staging_Orders;
In the above SQL statement, the /*+ APPEND */ hint instructs Oracle to perform a direct-path insert for faster data loading, and the PARALLEL(4) hint instructs Oracle to use parallel processing with four parallel execution servers (you can adjust the number based on your system's resources). The results were mind-blowing - loading finished in under 10 minutes! For big data loads, enabling direct-path INSERT with parallelism cuts the time from hours to minutes. I hope this example gives you some ideas on optimizing bulk data operations in Oracle through hints. Let me know if you have any other tips for supercharged ETL performance!
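Two session-level details are worth knowing when using this pattern. For the PARALLEL hint to parallelize the insert itself (not just the SELECT side), parallel DML must be enabled in the session, and rows loaded via direct path are not visible to the inserting session until the transaction commits. A sketch of the full sequence, using the same table names as above:

```sql
-- Without this, the hint parallelizes only the query side of the statement.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(4) */ INTO Orders (order_id, customer_id, order_date, total_amount)
SELECT order_id, customer_id, order_date, total_amount
FROM Staging_Orders;

-- Direct-path-inserted rows are invisible to this session until committed;
-- querying Orders before the COMMIT raises ORA-12838.
COMMIT;
```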
Thursday, March 16, 2023
Recently, one of my clients loaded a substantial amount of data into an Oracle database table, which consumed an excessive amount of disk space and resulted in sluggish query performance. To address these issues, I suggested implementing Oracle's compress and partition table features. By applying these techniques, we successfully saved disk space and improved query performance. In this example, we will create a table that combines both compression and partitioning with an interval. The table will store a sales transaction dataset, which includes columns for transaction ID, product ID, date, and revenue. We will use range partitioning with an interval of 1 month and enable Advanced Row Compression for the table. Creating the table with compression and interval partitioning:
CREATE TABLE sales_transactions (
  transaction_id   NUMBER,
  product_id       NUMBER,
  transaction_date DATE,
  revenue          NUMBER
)
COMPRESS FOR OLTP
PARTITION BY RANGE (transaction_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION sales_transactions_initial VALUES LESS THAN (TO_DATE('2023-01-01', 'YYYY-MM-DD'))
);
In this example, we have created a range-partitioned table with an interval of 1 month. The PARTITION BY RANGE clause specifies that we are partitioning the table based on the transaction_date column. The INTERVAL clause defines a 1-month interval for partition creation using the NUMTOYMINTERVAL function. The table also has an initial partition, sales_transactions_initial, that holds data for dates before '2023-01-01'. Lastly, the COMPRESS FOR OLTP clause, placed with the table's physical attributes before the partitioning clause, enables Advanced Row Compression for the table. Inserting data into the table:
INSERT INTO sales_transactions (transaction_id, product_id, transaction_date, revenue) VALUES (1, 101, TO_DATE('2022-12-15', 'YYYY-MM-DD'), 1000);
INSERT INTO sales_transactions (transaction_id, product_id, transaction_date, revenue) VALUES (2, 102, TO_DATE('2023-01-05', 'YYYY-MM-DD'), 1500);
INSERT INTO sales_transactions (transaction_id, product_id, transaction_date, revenue) VALUES (3, 103, TO_DATE('2023-01-20', 'YYYY-MM-DD'), 2000);
INSERT INTO sales_transactions (transaction_id, product_id, transaction_date, revenue) VALUES (4, 104, TO_DATE('2023-02-15', 'YYYY-MM-DD'), 2500);
Querying the table:
SELECT * FROM sales_transactions
WHERE transaction_date >= TO_DATE('2023-01-01', 'YYYY-MM-DD')
  AND transaction_date < TO_DATE('2023-02-01', 'YYYY-MM-DD');
This query retrieves sales transactions for January 2023. Oracle will use partition pruning to read only the relevant partition(s) for the specified date range, improving query performance. By combining compression and interval partitioning, you can optimize storage space, manage large tables efficiently, and enhance query performance in your Oracle database.
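If you want to see the pruning for yourself, the execution plan shows it directly: the Pstart/Pstop columns of the plan should name a single partition rather than spanning them all. A sketch against the table created above:

```sql
-- Capture the plan for the January query; the Pstart/Pstop columns in the
-- output should reference one partition, confirming pruning took place.
EXPLAIN PLAN FOR
SELECT * FROM sales_transactions
WHERE transaction_date >= TO_DATE('2023-01-01', 'YYYY-MM-DD')
  AND transaction_date < TO_DATE('2023-02-01', 'YYYY-MM-DD');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Interval partitions are created on demand as data arrives;
-- list what exists so far, in partition order:
SELECT partition_name, high_value
FROM user_tab_partitions
WHERE table_name = 'SALES_TRANSACTIONS'
ORDER BY partition_position;
```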
Wednesday, June 15, 2022
An intern has joined my client's company to work on a data science task, and I will be helping her along the way. She is familiar with the Python and R programming languages and has completed a number of projects with these tools. However, I successfully convinced her that it is a good idea to learn the database query language SQL and to use it for most data work. Here are the four reasons I gave:
- Unlike in academic environments, business data are mostly stored in relational databases. It is economical and secure to perform data analytics/data science jobs within databases using SQL.
- SQL is a standard language. SQL scripts, with minor or no changes, can run on any relational database, such as Snowflake, Oracle, SQL Server, MySQL, PostgreSQL, etc.
- Most data analytics/data science processes can be implemented in SQL and deployed in databases.
- SQL was developed about 50 years ago. It is still widely used and will continue to be so for many years to come. Not many programming languages have survived such a long period of time. Learning SQL is a good investment.
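To make the third point concrete, here is the kind of exploratory step often done in Python or R that is equally at home in SQL. The table name and columns are hypothetical (borrowed from the sales example elsewhere on this blog), and note that MEDIAN is an Oracle function not present in every dialect:

```sql
-- Basic descriptive statistics per product - a typical first step of
-- exploratory data analysis, done entirely inside the database.
SELECT product_id,
       COUNT(*)        AS n_transactions,
       AVG(revenue)    AS mean_revenue,
       STDDEV(revenue) AS sd_revenue,
       MEDIAN(revenue) AS median_revenue   -- Oracle-specific aggregate
FROM sales_transactions
GROUP BY product_id
ORDER BY mean_revenue DESC;
```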