ShopSite Tip – Quick Tips For Quantity Discounts

Quantity Pricing Sample

Quantity Pricing is a simple but powerful feature in ShopSite Pro that allows you to provide discounts for large orders, and even discounted pricing across multiple products in the same product group.

In this post we’ll cover some tips for configuring it, as well as common mistakes to avoid.

 

The Settings

Quantity Pricing Settings


Quantity Pricing is configured at the product level, on a product’s Edit Product Info screen. In the sample here you’ll see it’s enabled with three pricing tiers configured.

If Quantity Pricing is not working for you, first make sure the “Check here to turn on Quantity Pricing” checkbox is enabled.  It’s easy to miss that setting.

Next, be sure that the “Price/Unit” field is set with the amount you want to charge “Per Item”.  It’s not a total price for the quantity shown, but the price you want to charge per item when that quantity is in the cart.

For example, in our sample we charge $2.00 each when the cart holds a quantity of 1-499.  But with 500-999 in the cart, you are only charged $1.50 each.

NOTE: The tier price applies to the total quantity of that product in the cart.  Ordering 750 will not charge $2.00 each for the first 499 units and $1.50 each for the remaining 251.  Since the quantity falls in the 500-999 range, all 750 units are charged $1.50 each.
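To make the all-units behavior concrete, here is a minimal sketch in Python of the tier lookup (an illustration of the logic only, not ShopSite’s actual code; the tiers come from the sample above):

def unit_price(quantity, tiers):
    """Return the per-unit price for the tier a quantity falls into.

    tiers is a list of (minimum_quantity, price_per_unit) pairs,
    sorted by ascending minimum quantity."""
    price = tiers[0][1]
    for minimum_quantity, price_per_unit in tiers:
        if quantity >= minimum_quantity:
            price = price_per_unit
    return price

# The two tiers given in the sample: 1+ at $2.00/each, 500+ at $1.50/each
tiers = [(1, 2.00), (500, 1.50)]

print(unit_price(750, tiers))        # 1.5 -- the whole order lands in the 500-999 tier
print(750 * unit_price(750, tiers))  # 1125.0, not a blended 499 * 2.00 + 251 * 1.50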


Quantity Groups

Quantity Pricing Groups


What really expands the Quantity Pricing feature is its ability to classify products into groups.  Let’s say you want to run a special where someone can buy a mixture of 10 or more items at a discounted price:

The first step is to create a group for those products, which is done on the Merchandising Tools > Discounts > Quantity Groups screen.  Shown here is that screen with a “Movies” group created.

Then configure each product in the group to have Quantity Pricing enabled, with the 1+ price set to the regular retail price and the 10+ price set to your discounted price.

While editing the pricing, set the “Quantity Pricing Group” field on the bottom of that section to the group you created.  You can see that field set to “none” in the first screenshot above.

With that field set, ShopSite will count the quantities of all products in the cart that are assigned to the same group to determine which price they receive.  So if someone buys five each of two different products in the Movies group, they will receive the 10+ discounted price.
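Group pricing follows the same logic as the earlier sketch, except the tier lookup uses the combined quantity of every cart line in the group. A minimal Python sketch (the SKUs and prices here are hypothetical):

def group_price(cart, group, tiers):
    """Total the quantities of all cart lines in the given group,
    then return the per-unit price for the tier that total falls into."""
    total = sum(line["quantity"] for line in cart if line["group"] == group)
    price = tiers[0][1]
    for minimum_quantity, price_per_unit in tiers:
        if total >= minimum_quantity:
            price = price_per_unit
    return price

# Two different products in the "Movies" group, 5 each: combined quantity is 10,
# so both lines earn the 10+ per-unit price.
cart = [
    {"sku": "MOVIE-A", "group": "Movies", "quantity": 5},
    {"sku": "MOVIE-B", "group": "Movies", "quantity": 5},
]
tiers = [(1, 19.95), (10, 14.95)]  # hypothetical regular and 10+ prices
print(group_price(cart, "Movies", tiers))  # 14.95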


Sale Prices

As you may have seen in the first screenshot, Quantity Pricing also supports sale prices: when the “On Sale Toggle” value (on a product’s Edit Product Layout screen) is enabled, the “On Sale Price/Unit” column is used instead.  It’s blank in our sample, but you can fill it in if you use the On Sale feature with your products.


Test, Test, Test

When you’re trying a new feature, don’t hesitate to test it.  Add a test page to your store (be sure to give it a filename, e.g. “testpage.html”), then add a test product to that page and configure it with Quantity Pricing and all the settings you want to use.

Then you can visit your test page and your product’s More Info Page to see how the pricing displays, and test it by adding the product to your cart.  This way you can experiment without touching a live product a customer may be trying to order.

 

Google Shopping: Goodbye Product Listing Ads

Google recently launched Shopping Campaigns, a new type of AdWords campaign, which is set to replace Product Listing Ad (PLA) campaigns.  New Merchant Center feeds are required to use the new Shopping Campaign type.  According to Google, legacy feeds will be migrated to the new campaign type later this summer (August 2014 ETA).

The new Shopping Campaign type has many benefits for merchants, including:

  • Streamlined Interface
  • Advanced Product Targeting
  • Custom Labels
  • Negative Keywords

 

Within the new interface, Google has included a complete list of all eligible products in your shopping feed.  That’s right, no more switching back and forth between AdWords and Merchant Center just to verify the current product list :)

Having the list in AdWords also helps eliminate any question as to whether your most up-to-date product feed is connected to AdWords.  There is, of course, a short delay before AdWords updates with the latest feed, but it’s worth the wait!


 

Dissecting the ‘All Products’ group is another strength of Shopping Campaigns.  The improved interface is easier and more powerful than its predecessor’s often irritating process of setting “Auto-Targets”.

When sub-dividing a group, AdWords automatically verifies that the attribute exists in your feed, then loads the appropriate items into the left window.  To select items for your sub-group, simply drag them from the left window to the right window.


Below are the attributes available for sub-dividing the ‘All products’ group:

  • Category
  • Brand
  • Item ID
  • Condition
  • Product type
  • Custom label (0-4)

While there are several new options, it’s important to note that “adwords labels” and “adwords grouping” are no longer available for targeting products.  If you currently use these fields, you will want to migrate those values to one of the newly available custom labels.  Also noteworthy is that “Category” represents the first level of the Google taxonomy (assigned to each product in your shopping feed).

Item ID works quite well in the new interface and gives merchants the ability to set CPC targets for a single product.  If you recall, this option did not work as expected in the PLA version.

Google also delivers five new custom attributes to help further organize and segment your product list.  Using these new fields, of course, requires that you add them to your shopping feed.

To take advantage of custom labels, ShopSite users can leverage Extra Product Fields to send across new product categories, labels, descriptors, etc…

To create Extra Product Fields:

  • Navigate to Preferences > Extra Fields
  • Determine which Extra Product Field is to be used and assign it the name ‘custom_label_0’ (you can add up to 5 custom fields; the last one would be named ‘custom_label_4’)
  • Update the ‘Number of product fields to display’, then Save

Assigning Extra Product Fields to the Google Shopping Feed:

  • Navigate to Merchandising > Google Services > Shopping > Configure
  • In the Attributes section, check the boxes next to each custom label you wish to include in the feed, then Save

  • Click ‘Send Feed’
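For reference, a row of the resulting feed might look something like this (the attribute names follow Google’s feed specification; the product values here are hypothetical, and columns are tab-delimited in the actual feed):

id          title           price        custom_label_0
SKU-1001    Example Widget  19.99 USD    clearance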

The steps required for Magento users will vary based on the extension used in their respective stores.  The steps below apply primarily to those store owners using the core Google API.

  • Create new product attributes under Catalog > Attributes > Manage Attributes.

  • Assign new attribute to the appropriate Attribute Set under Catalog > Attributes > Manage Attribute Sets.

  • Map new attributes to the Google Shopping feed using Catalog > Google Content > Manage Attributes.

  • Synchronize products under Catalog > Google Content > Manage Items.

Negative Keywords

Using negative keywords helps prevent ads from displaying where they have little opportunity to convert.  While this doesn’t necessarily change your budget, it should help ensure advertising dollars are better spent.  The new interface allows negative keywords to be assigned at the campaign and/or ad group level.  Good candidates are search terms that overlap with your products but include words unrelated to the items you sell; for example, if you only sell new items, “used” may make a good negative keyword.


Be careful not to include a negative keyword or phrase that matches one of your valid keywords.  Doing so will prevent your ads from running!


To read more about the features in Shopping Campaigns, please visit Google.


 

How To Set Up Selective Master Slave Replication in MySQL

There are a number of tutorials out there for setting up replication in MySQL. However, I couldn’t find one that fully addressed setting up selective master-slave replication in MySQL.

By selective, I am referring to only having one or a few databases that are replicated from the master database to the slave database. Any other databases on the master server are not copied/replicated.

Master-slave replication for a MySQL database refers to having a secondary MySQL server where any changes made to the main database are replicated (copied) to the secondary MySQL database, which becomes a copy of the main database. This secondary database can be used as a “hot” backup, used to run queries you don’t want to run against the live database, or used to allow backups to be made without affecting the performance of the live database.

Tutorials out there now

There is a great tutorial for setting up Master Slave Replication for all databases. It is well documented.

Another tutorial that is quite good and is almost complete (with a few typos) is One database set up for master-slave replication.

Then there is the ancient HowToForge MySQL Replication tutorial. It is thorough, but is very out of date.

Each of these tutorials is missing one or more items, or is not clear on some steps, and you can run into issues when setting it up.

Gotchas

There are 4 gotchas when setting up MySQL Master-Slave replication:

1. Using the setting “replicate-do-db” on the slave instance can cause issues with not all queries being replicated, because (with statement-based replication) it filters on the default database selected with USE rather than on the table actually being written. Instead, I recommend using “replicate-wild-do-table” so that all queries (regardless of construct) will be replicated in all scenarios, as illustrated below.
2. Using a single “binlog-do-db” line for multiple databases will cause replication to fail. If replicating multiple databases, use multiple “binlog-do-db” lines, one for each database.
3. Don’t put master connection settings in the my.cnf configuration file. These settings are no longer supported in MySQL 5.5 or higher; use the CHANGE MASTER TO statement instead (as in the steps below).
4. Use a second window/session correctly for the initial database dump so the read lock isn’t released before the dump completes. Get this wrong and your slave instance will be corrupt.
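To make gotcha #1 concrete, here is a hypothetical example (the some_other_db and users names are made up). With “replicate-do-db=DATABASE_NAME1”, the following fully-qualified update is silently skipped on the slave because the default database doesn’t match, while “replicate-wild-do-table=DATABASE_NAME1.%” matches the table actually being written and replicates it:

USE some_other_db;
UPDATE DATABASE_NAME1.users SET email = 'user@example.com' WHERE id = 42;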

Steps to set up MySQL master slave replication

ON THE MASTER DATABASE SERVER:

1. The first step is to set up the master database for replication. This can be done while the database server is running. Edit the my.cnf file on the master database server, adding the following lines:

log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db=DATABASE_NAME1
binlog-do-db=DATABASE_NAME2
server-id=1

where “DATABASE_NAME1” and “DATABASE_NAME2” are the names of the databases you plan to replicate.

2. Next, restart MySQL on the master server.

3. Then, on the master database, log into MySQL on the command line (mysql -p) and issue the following SQL queries:

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;

4. When you are ready to create the database dump of the master database(s), you will run the following commands. You *must* stay logged into this session, and do *not* quit or even issue another command:

USE DATABASE_NAME1;
FLUSH TABLES WITH READ LOCK;

If you have more than one database to replicate, you will need a separate window/session for each database that you stayed logged into. You would repeat these commands in each window for each database, staying logged in after issuing the commands.

5. In a second ssh window, you will log into MySQL again and run the following query and record the values:

SHOW MASTER STATUS;

It should look something like this (You want to record the File value and Position value):

mysql> SHOW MASTER STATUS;
+------------------+-----------+--------------------+------------------+
| File             | Position  | Binlog_Do_DB       | Binlog_Ignore_DB |
+------------------+-----------+--------------------+------------------+
| mysql-bin.000013 | 250789445 | DB_NAME1,DB_NAME2  |                  |
+------------------+-----------+--------------------+------------------+
1 row in set (0.00 sec)

6. The next step is to dump the database(s) from the master server:

mysqldump -p --opt DATABASE_NAME1 >DATABASE_NAME1.sql
mysqldump -p --opt DATABASE_NAME2 >DATABASE_NAME2.sql

You will then want to transfer these files to the slave server, as you’ll use them to seed the slave databases later on.

7. Once you have the dump files, and you recorded the values of the master status, you can unlock the database by going back to the first window (and other window(s) for each database) that is still logged into MySQL and running:

UNLOCK TABLES;
quit;

ON THE SLAVE DATABASE SERVER:

1. Log into MySQL (mysql -p) and set up your databases:

CREATE DATABASE DATABASE_NAME1;
CREATE DATABASE DATABASE_NAME2;
quit;

2. In the my.cnf file on the slave server, add the following lines:

server-id=2
relay-log=/var/log/mysql/mysql-relay-bin.log
replicate-wild-do-table=DATABASE_NAME1.%
replicate-wild-do-table=DATABASE_NAME2.%

This will match any type of query run against the databases, and ensure they are fully replicated to the slave server.

3. Restart MySQL on the slave server.

4. Log back into MySQL (mysql -p) on the slave server and run the following queries. Make sure you stay logged into this session. You’ll need the values you recorded from the master database, as well as IP addresses and usernames/passwords:

STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='1.2.3.4', MASTER_USER='slave_user', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.XX', MASTER_LOG_POS=XX;

Where “1.2.3.4” is the IP of the master server, and “slave_user” and “password” are the username and password of the MySQL user for replication. The “XX” values in the query are those you recorded in step 5 on the master server.

5. In a second window, import the databases:

mysql -p DATABASE_NAME1 < DATABASE_NAME1.sql
mysql -p DATABASE_NAME2 < DATABASE_NAME2.sql

6. Back in the first window (that is still logged into MySQL), you may now run the following MySQL queries:

START SLAVE;
quit;

That’s it. The slave server is now running, and should be replicating with the master server.
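As a quick sanity check (standard MySQL, not specific to this setup), you can run the following on the slave and confirm that Slave_IO_Running and Slave_SQL_Running both show “Yes” and Seconds_Behind_Master is low. Note this only proves the replication threads are running; it does not prove the data matches (see the next section for that):

SHOW SLAVE STATUS\G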

How to verify the slave server is in sync with the master

This is not an easy thing to do manually. There is no simple command that tells you everything is in sync.

Fortunately, the good folks at Percona have a toolkit (Percona Toolkit, which includes the pt-table-checksum tool used below) that makes this easy to verify. You install it on the master server (simple perl Makefile.PL, make, and make install).
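From the directory where you unpacked the toolkit, that amounts to the standard Perl module install steps (run the last step as root if needed):

perl Makefile.PL
make
make install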

Once installed, you use a separate database (that should also be replicated) to track the sync status. I called ours “percona” and set it up on the master with:

CREATE DATABASE percona;
USE percona;
CREATE TABLE checksums (
db             char(64)     NOT NULL,
tbl            char(64)     NOT NULL,
chunk          int          NOT NULL,
chunk_time     float            NULL,
chunk_index    varchar(200)     NULL,
lower_boundary text             NULL,
upper_boundary text             NULL,
this_crc       char(40)     NOT NULL,
this_cnt       int          NOT NULL,
master_crc     char(40)         NULL,
master_cnt     int              NULL,
ts             timestamp    NOT NULL,
PRIMARY KEY (db, tbl, chunk),
INDEX ts_db_tbl (ts, db, tbl)
) ENGINE=InnoDB;

I set this table up to be replicated along with our other databases.

Then, you simply run a command on the master server (via ssh, cron, etc…) to verify all is in sync:

pt-table-checksum --user=XXX --password=YYY --databases DATABASE_NAME1 --nocheck-replication-filters

This will give you an output and indicate if there are any differences between the databases. The column you are most concerned with is “DIFF”, which should be all zeroes if everything is in sync. You may see a number greater than zero in the “ERRORS” column. Many times this can be ignored depending on the details of the error.


Hopefully this helps you set up selective master-slave replication in MySQL without running into corruption or missing queries/data on the slave instance. I’ve found this process to work quite well for the replication setups we have put in place. I’m sure there are many other ways to skin the cat, but the “gotchas” I listed above are items to consider no matter which plan you implement.

ShopSite Shipping With ShipStation

Your store is online, you’re taking orders, but is shipping them taking too long?  We have a time-saver for you:


Dashboard Showing Orders To Ship

LexiConn has recently released a module to integrate your ShopSite store with ShipStation, a hosted order management system which can print all your shipping labels and even notify your customers of their tracking numbers.


Printing a Priority Mail Shipping Label

The combination of ShopSite and ShipStation is seamless and allows you to easily print your shipping labels using a variety of carriers.  It can even print your packing slips for you when it prints the labels.

All of your orders will be imported into a dashboard where you can change shipping methods, update customers’ addresses, and even split orders if you want to ship just part of an order.

ShipStation has a free 30-day trial available which you can sign up for here.  For more information about our module, or to have it installed on your LexiConn account, please view the details here.

Not a hosted client? Check out our easy, pain-free transfer process to have your ShopSite store and website moved over to LexiConn with no downtime.

Heartbleed – All LexiConn Servers Patched

If you haven’t already seen or heard about Heartbleed, the large vulnerability that affected over half a million trusted websites, here is a synopsis, the status of our servers being patched, and my take on why the sky is NOT falling due to this issue.

LexiConn is Safe

Once this vulnerability was announced, we had all of our affected servers patched within a few hours. Note that the large majority of our servers were not vulnerable to this attack, as they run versions of the OpenSSL software that did not have the bug in them.

What is Heartbleed?

Heartbleed is a bug in the very popular OpenSSL cryptographic library used by many modern servers throughout the world. OpenSSL provides the backbone of the encryption used for SSL (i.e. secure) communications over the web.

The bug allows a would-be attacker to access random “chunks” of memory from the server (64 KB at a time). Over time, an attacker *could* get the secret key for the SSL security, and then use that key to go back and decrypt data they had collected.

The attack was discovered and published on April 7th. The severity of this exploit stems from a random attacker being able to request sensitive memory data without it triggering anything unusual in the server log files. The attacker does not need to be on the server, and it does not require a more complicated “man in the middle” attack vector.

How Real is this Threat?

The media sure likes to jump on a story like this. The “sky is falling” doomsday articles are a bit overblown.

That’s not to say this isn’t a big deal. It is. It needs to be taken seriously, and all internet providers should already be patched.

Once the exploit was released to the public (along with simple code anyone could run to take advantage of this bug), the threat became *VERY* real. Getting patched quickly was the best defense against Heartbleed.

However, here is my take on the odds of a hacker being able to find this vulnerability on his or her own, and then successfully use this information to exploit servers and data…

First, the hacker would have needed to be smart enough to find this vulnerability on their own. It took a team of researchers from a security company and Google to discover the flaw; it likely isn’t something the average hacker would have found independently.

For the sake of argument, let’s say one of the top hackers out there found out about this flaw. They would have needed to keep it quiet (which is certainly possible), as no one else had heard about it before the team announced it a few days ago.

Next, they would have needed to launch a targeted attack against a site or group of sites that they wanted to try and compromise (and that were running software vulnerable to this attack). This attack is not easy to exploit fully, as it takes time, skill, and patience to collect 64 KB of random data, one connection at a time. Each fragment would need to be saved and examined in the hope of obtaining the secret private key that encrypts the data.

If this secret key were to be pieced back together, then the process of assembling these random fragments into usable chunks and then decrypting them begins. It’s not something you can just write a few lines of code for and hit the jackpot. It requires a great deal of resources and know-how to pull off.

If all of the above did happen, the hacker would probably want to target something big, like a bank or a large site like Yahoo. It’s highly unlikely they would target small business websites, as the effort expended would far outweigh the potential data they might recover.

As you can see, the odds of this being pulled off *BEFORE* the bug was announced to the public are probably quite low. In the neighborhood of being struck by lightning twice in the same spot on the same day kinda odds.

However, *AFTER* the bug was announced, the threat became much more real. With available code, the “script kiddies” could launch these attacks easily, and the potential for sensitive data to be found becomes quite high.

To reiterate, all LexiConn servers and accounts were fully patched against Heartbleed a few hours after the release was made public. The vast majority of our servers were not vulnerable to this attack at any time, so there was only a tiny chance it could have even been exploited in the past.

If you have any questions about this vulnerability as it relates to your account with us, just drop us an email, or give us a call.